Dataset schema (Hugging Face dataset-viewer summary): column name, dtype, and observed string-length range, value range, or class count.

| Column | Dtype | Values |
|---|---|---|
| url | string | lengths 58–61 |
| repository_url | string | 1 class |
| labels_url | string | lengths 72–75 |
| comments_url | string | lengths 67–70 |
| events_url | string | lengths 65–68 |
| html_url | string | lengths 46–51 |
| id | int64 | 599M–1.79B |
| node_id | string | lengths 18–32 |
| number | int64 | 1–6.01k |
| title | string | lengths 1–290 |
| user | dict | |
| labels | list | |
| state | string | 2 classes |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | list | |
| comments | sequence | |
| created_at | int64 | 1,587B–1,689B (epoch ms) |
| updated_at | int64 | 1,588B–1,689B (epoch ms) |
| closed_at | int64 | 1,587B–1,689B (epoch ms, nullable) |
| author_association | string | 3 classes |
| active_lock_reason | null | |
| body | string | lengths 0–228k (nullable) |
| reactions | dict | |
| timeline_url | string | lengths 67–70 |
| performed_via_github_app | null | |
| state_reason | string | 3 classes |
| draft | float64 | 0–1 (nullable) |
| pull_request | dict | |
| is_pull_request | bool | 2 classes |

url: https://api.github.com/repos/huggingface/datasets/issues/5694
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/5694/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/5694/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/5694/events
html_url: https://github.com/huggingface/datasets/issues/5694
id: 1,650,467,793
node_id: I_kwDODunzps5iYCPR
number: 5,694
title: Dataset configuration
user: { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
labels:
[ { "id": 2067400324, "node_id": "MDU6TGFiZWwyMDY3NDAwMzI0", "url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion", "name": "generic discussion", "color": "c5def5", "default": false, "description": "Generic discussion on the library" } ]
state: open
locked: false
assignee: null
assignees: []
comments: [ "Originally we also thought about adding it to the YAML part of the README.md:\r\n\r\n```yaml\r\nbuilder_config:\r\n data_dir: data\r\n data_files:\r\n - split: train\r\n pattern: \"train-[0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9]*.*\"\r\n```\r\n\r\nHaving it in the README.md could make it easier to modify it in the UI on HF, and for validation on commit", "From internal discussions we agreed to go with the YAML approach, since it's the one that seems more appropriate to be modified by a human on the Hub or locally (while JSON e.g. for models are usually created programmatically).", "Current format:\r\n```yaml\r\nbuilder_config:\r\n data_files:\r\n - split: train\r\n pattern: data/train-*\r\n```" ]
created_at: 1,680,354,485,000
updated_at: 1,680,620,077,000
closed_at: null
author_association: MEMBER
active_lock_reason: null
body: Following discussions from https://github.com/huggingface/datasets/pull/5331, we could have something like `config.json` to define the configuration of a dataset. ```json { "data_dir": "data", "data_files": { "train": "train-[0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9]*.*" } } ``` We could also support a list for several configs with a 'config_name' field. The alternative was to use YAML in the README.md. I think it could also support a `dataset_type` field to specify which dataset builder class to use; the other parameters would be the builder's parameters. Some parameters exist for all builders, like `data_files` and `data_dir`, but some are builder-specific, like `sep` for csv. This format would be used in `push_to_hub` to be able to push multiple configs. cc @huggingface/datasets EDIT: actually we're going for the YAML approach in README.md
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/5694/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/5694/timeline
performed_via_github_app: null
state_reason: null
draft: null
pull_request: null
is_pull_request: false
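The issue body above mentions supporting several configs as a list with a `config_name` field. Below is a hedged sketch of what that list form could look like, parsed with PyYAML; the config names and patterns are invented for illustration and do not come from the issue.

```python
import yaml  # assumes PyYAML is available

# Hypothetical multi-config variant of the `builder_config` YAML shown in the
# comments above: a list of configs, each tagged with `config_name`.
card = """
builder_config:
  - config_name: en
    data_files:
      - split: train
        pattern: data/en/train-*
  - config_name: fr
    data_files:
      - split: train
        pattern: data/fr/train-*
"""
for cfg in yaml.safe_load(card)["builder_config"]:
    print(cfg["config_name"], cfg["data_files"][0]["pattern"])
```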

url: https://api.github.com/repos/huggingface/datasets/issues/5693
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/5693/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/5693/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/5693/events
html_url: https://github.com/huggingface/datasets/pull/5693
id: 1,649,934,749
node_id: PR_kwDODunzps5NYdPS
number: 5,693
title: [docs] Split pattern search order
user: { "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "repos_url": "https://api.github.com/users/stevhliu/repos", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "type": "User", "site_admin": false }
labels: []
state: closed
locked: false
assignee: null
assignees: []
comments:
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007841 / 0.011353 (-0.003512) | 0.005640 / 0.011008 (-0.005368) | 0.096465 / 0.038508 (0.057957) | 0.036476 / 0.023109 (0.013367) | 0.306431 / 0.275898 (0.030533) | 0.339545 / 0.323480 (0.016065) | 0.006064 / 0.007986 (-0.001922) | 0.004404 / 0.004328 (0.000076) | 0.073130 / 0.004250 (0.068879) | 0.052765 / 0.037052 (0.015713) | 0.309895 / 0.258489 (0.051406) | 0.354037 / 0.293841 (0.060196) | 0.037127 / 0.128546 (-0.091420) | 0.012387 / 0.075646 (-0.063260) | 0.333503 / 0.419271 (-0.085769) | 0.059799 / 0.043533 (0.016266) | 0.305496 / 0.255139 (0.050358) | 0.324122 / 0.283200 (0.040922) | 0.107007 / 0.141683 (-0.034676) | 1.416743 / 1.452155 (-0.035411) | 1.520772 / 1.492716 (0.028055) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.261233 / 0.018006 (0.243227) | 0.573806 / 0.000490 (0.573316) | 0.000390 / 0.000200 (0.000190) | 0.000058 / 0.000054 (0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027672 / 0.037411 (-0.009740) | 0.112803 / 0.014526 (0.098278) | 0.121085 / 0.176557 (-0.055471) | 0.176056 / 0.737135 (-0.561080) | 0.127171 / 0.296338 (-0.169167) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.414756 / 0.215209 (0.199547) | 4.148743 / 2.077655 (2.071088) | 
1.883940 / 1.504120 (0.379820) | 1.698771 / 1.541195 (0.157576) | 1.811926 / 1.468490 (0.343436) | 0.708293 / 4.584777 (-3.876484) | 3.780456 / 3.745712 (0.034744) | 2.098556 / 5.269862 (-3.171306) | 1.323512 / 4.565676 (-3.242164) | 0.086253 / 0.424275 (-0.338022) | 0.012587 / 0.007607 (0.004980) | 0.514824 / 0.226044 (0.288779) | 5.157415 / 2.268929 (2.888487) | 2.382519 / 55.444624 (-53.062105) | 2.014539 / 6.876477 (-4.861938) | 2.215239 / 2.142072 (0.073166) | 0.847178 / 4.805227 (-3.958049) | 0.170053 / 6.500664 (-6.330611) | 0.066461 / 0.075469 (-0.009008) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.199056 / 1.841788 (-0.642732) | 15.244999 / 8.074308 (7.170691) | 14.661593 / 10.191392 (4.470201) | 0.168855 / 0.680424 (-0.511569) | 0.017889 / 0.534201 (-0.516312) | 0.424961 / 0.579283 (-0.154322) | 0.428632 / 0.434364 (-0.005732) | 0.502680 / 0.540337 (-0.037658) | 0.597827 / 1.386936 (-0.789109) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007749 / 0.011353 (-0.003604) | 0.005527 / 0.011008 (-0.005482) | 0.074774 / 0.038508 (0.036266) | 0.035367 / 0.023109 (0.012258) | 0.340594 / 0.275898 (0.064696) | 0.373970 / 0.323480 (0.050490) | 0.006094 / 0.007986 (-0.001892) | 0.004428 / 0.004328 (0.000100) | 0.074120 / 0.004250 (0.069869) | 0.054852 / 0.037052 (0.017800) | 0.357173 / 0.258489 (0.098684) | 0.388877 / 0.293841 (0.095036) | 0.037002 / 0.128546 (-0.091545) | 0.012337 / 0.075646 (-0.063309) | 0.086962 / 0.419271 (-0.332310) | 0.050370 / 0.043533 (0.006837) | 0.342989 / 0.255139 (0.087850) | 0.358065 / 0.283200 (0.074865) | 0.111063 / 0.141683 (-0.030620) | 1.516704 / 1.452155 (0.064549) | 1.634359 / 1.492716 (0.141643) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.261493 / 0.018006 (0.243487) | 0.566288 / 0.000490 (0.565799) | 0.000439 / 0.000200 (0.000239) | 0.000056 / 0.000054 (0.000002) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030426 / 0.037411 (-0.006985) | 0.114606 / 0.014526 (0.100080) | 0.126134 / 0.176557 (-0.050423) | 0.175324 / 0.737135 (-0.561812) | 0.132766 / 0.296338 (-0.163573) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.426785 / 0.215209 (0.211576) | 4.243555 / 2.077655 (2.165900) | 2.089631 / 1.504120 (0.585511) | 1.994562 / 1.541195 (0.453367) | 2.140284 / 1.468490 (0.671794) | 0.698645 / 4.584777 (-3.886132) | 3.807471 / 3.745712 (0.061759) | 3.275343 / 5.269862 (-1.994519) | 1.796756 / 4.565676 (-2.768921) | 0.085986 / 0.424275 (-0.338289) | 0.012213 / 0.007607 (0.004606) | 0.536815 / 0.226044 (0.310771) | 5.344611 / 2.268929 (3.075683) | 2.498578 / 55.444624 (-52.946047) | 2.153260 / 6.876477 (-4.723217) | 2.251310 / 2.142072 (0.109237) | 0.839104 / 4.805227 (-3.966123) | 0.169639 / 6.500664 (-6.331025) | 0.065880 / 0.075469 (-0.009589) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.268610 / 1.841788 (-0.573178) | 15.624915 / 8.074308 (7.550606) | 15.163684 / 10.191392 (4.972292) | 0.172992 / 0.680424 (-0.507432) | 0.018154 / 0.534201 (-0.516047) | 0.440485 / 0.579283 (-0.138798) | 0.431949 / 0.434364 (-0.002415) | 0.547935 / 0.540337 (0.007597) | 0.662442 / 1.386936 (-0.724494) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#5c8a6ba43c4aaa0ca0665d8dadd87ef33e28e8e4 \"CML watermark\")\n" ]
created_at: 1,680,292,298,000
updated_at: 1,680,547,410,000
closed_at: 1,680,546,598,000
author_association: MEMBER
active_lock_reason: null
body: This PR addresses #5681 about the order of split patterns πŸ€— Datasets searches for when generating dataset splits.
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/5693/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/5693/timeline
performed_via_github_app: null
state_reason: null
draft: 0
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/5693", "html_url": "https://github.com/huggingface/datasets/pull/5693", "diff_url": "https://github.com/huggingface/datasets/pull/5693.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5693.patch", "merged_at": "2023-04-03T18:29:58" }
is_pull_request: true
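As an illustration of the behavior this docs PR describes, here is a hedged sketch of matching data files against ordered split patterns, where the first pattern with matches defines a split's files. This mimics, but does not reproduce, the library's actual search; the filenames and pattern lists are invented.

```python
import fnmatch

# Invented example filenames and per-split pattern lists, ordered from most
# to least specific.
filenames = ["train-00000-of-00002.parquet", "train-00001-of-00002.parquet", "test.csv"]
patterns = {
    "train": ["data/train-*", "train-*", "train.*"],
    "test": ["data/test-*", "test-*", "test.*"],
}

for split, ordered in patterns.items():
    for pattern in ordered:
        matches = fnmatch.filter(filenames, pattern)
        if matches:  # first matching pattern wins; later patterns are ignored
            print(f"{split}: {pattern} -> {matches}")
            break
```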

url: https://api.github.com/repos/huggingface/datasets/issues/5692
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/5692/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/5692/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/5692/events
html_url: https://github.com/huggingface/datasets/issues/5692
id: 1,649,818,644
node_id: I_kwDODunzps5iVjwU
number: 5,692
title: pyarrow.lib.ArrowInvalid: Unable to merge: Field <field> has incompatible types
user: { "login": "cyanic-selkie", "id": 32219669, "node_id": "MDQ6VXNlcjMyMjE5NjY5", "avatar_url": "https://avatars.githubusercontent.com/u/32219669?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cyanic-selkie", "html_url": "https://github.com/cyanic-selkie", "followers_url": "https://api.github.com/users/cyanic-selkie/followers", "following_url": "https://api.github.com/users/cyanic-selkie/following{/other_user}", "gists_url": "https://api.github.com/users/cyanic-selkie/gists{/gist_id}", "starred_url": "https://api.github.com/users/cyanic-selkie/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cyanic-selkie/subscriptions", "organizations_url": "https://api.github.com/users/cyanic-selkie/orgs", "repos_url": "https://api.github.com/users/cyanic-selkie/repos", "events_url": "https://api.github.com/users/cyanic-selkie/events{/privacy}", "received_events_url": "https://api.github.com/users/cyanic-selkie/received_events", "type": "User", "site_admin": false }
labels: []
state: open
locked: false
assignee: null
assignees: []
comments: [ "Hi! The link pointing to the code that generated the dataset is broken. Can you please fix it to make debugging easier?", "> Hi! The link pointing to the code that generated the dataset is broken. Can you please fix it to make debugging easier?\r\n\r\nSorry about that, it's fixed now.\r\n" ]
created_at: 1,680,286,780,000
updated_at: 1,680,619,110,000
closed_at: null
author_association: NONE
active_lock_reason: null
body:
### Describe the bug When loading the dataset [wikianc-en](https://huggingface.co/datasets/cyanic-selkie/wikianc-en) which I created using [this](https://github.com/cyanic-selkie/wikianc) code, I get the following error: ``` Traceback (most recent call last): File "/home/sven/code/rector/answer-detection/train.py", line 106, in <module> (dataset, weights) = get_dataset(args.dataset, tokenizer, labels, args.padding) File "/home/sven/code/rector/answer-detection/dataset.py", line 106, in get_dataset dataset = load_dataset("cyanic-selkie/wikianc-en") File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/load.py", line 1794, in load_dataset ds = builder_instance.as_dataset(split=split, verification_mode=verification_mode, in_memory=keep_in_memory) File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/builder.py", line 1106, in as_dataset datasets = map_nested( File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 443, in map_nested mapped = [ File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 444, in <listcomp> _single_map_nested((function, obj, types, None, True, None)) File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 346, in _single_map_nested return function(data_struct) File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/builder.py", line 1136, in _build_single_dataset ds = self._as_dataset( File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/builder.py", line 1207, in _as_dataset dataset_kwargs = ArrowReader(cache_dir, self.info).read( File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/arrow_reader.py", line 239, in read return self.read_files(files=files, original_instructions=instructions, in_memory=in_memory) File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/arrow_reader.py", line 260, in read_files pa_table = self._read_files(files, in_memory=in_memory) File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/arrow_reader.py", line 203, in _read_files pa_table = concat_tables(pa_tables) if len(pa_tables) != 1 else pa_tables[0] File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/table.py", line 1808, in concat_tables return ConcatenationTable.from_tables(tables, axis=axis) File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/table.py", line 1514, in from_tables return cls.from_blocks(blocks) File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/table.py", line 1427, in from_blocks table = cls._concat_blocks(blocks, axis=0) File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/table.py", line 1373, in _concat_blocks return pa.concat_tables(pa_tables, promote=True) File "pyarrow/table.pxi", line 5224, in pyarrow.lib.concat_tables File "pyarrow/error.pxi", line 144, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 100, in pyarrow.lib.check_status 
pyarrow.lib.ArrowInvalid: Unable to merge: Field paragraph_anchors has incompatible types: list<: struct<start: uint32 not null, end: uint32 not null, qid: uint32, pageid: uint32, title: string not null> not null> vs list<item: struct<start: uint32, end: uint32, qid: uint32, pageid: uint32, title: string>> ``` This only happens when I load the `train` split, indicating that the size of the dataset is the deciding factor. ### Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("cyanic-selkie/wikianc-en", split="train") ``` ### Expected behavior The dataset should load normally without any errors. ### Environment info - `datasets` version: 2.10.1 - Platform: Linux-6.2.8-arch1-1-x86_64-with-glibc2.37 - Python version: 3.10.10 - PyArrow version: 11.0.0 - Pandas version: 1.5.3
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5692/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5692/timeline
null
null
null
null
false
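To make the schema clash in this report concrete, below is a hedged, minimal sketch of concatenating two tables whose nested fields disagree on nullability, the kind of mismatch named in the traceback. The column data is invented, the `promote=True` keyword matches the pyarrow 11-era API used in the report, and whether the merge fails can depend on the pyarrow version, hence the try/except.

```python
import pyarrow as pa

# Same logical column typed two ways: inner fields `not null` vs nullable,
# mirroring the paragraph_anchors mismatch in the error message.
strict = pa.list_(pa.field("item", pa.struct([pa.field("start", pa.uint32(), nullable=False)]), nullable=False))
loose = pa.list_(pa.field("item", pa.struct([pa.field("start", pa.uint32())])))

t1 = pa.table({"paragraph_anchors": pa.array([[{"start": 1}]], type=strict)})
t2 = pa.table({"paragraph_anchors": pa.array([[{"start": 2}]], type=loose)})

try:
    pa.concat_tables([t1, t2], promote=True)  # schema unification may fail here
except pa.ArrowInvalid as err:
    print(err)  # e.g. "Unable to merge: Field paragraph_anchors has incompatible types"
```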

url: https://api.github.com/repos/huggingface/datasets/issues/5691
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/5691/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/5691/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/5691/events
html_url: https://github.com/huggingface/datasets/pull/5691
id: 1,649,737,526
node_id: PR_kwDODunzps5NX08d
number: 5,691
title: [docs] Compress data files
user: { "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "repos_url": "https://api.github.com/users/stevhliu/repos", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "type": "User", "site_admin": false }
labels: []
state: closed
locked: false
assignee: null
assignees: []
comments:
[ "_The documentation is not available anymore as the PR was closed or merged._", "[Confirmed](https://huggingface.slack.com/archives/C02EMARJ65P/p1680541667004199) with the Hub team the file size limit for the Hugging Face Hub is 10MB :)", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006789 / 0.011353 (-0.004564) | 0.004935 / 0.011008 (-0.006073) | 0.096796 / 0.038508 (0.058288) | 0.032485 / 0.023109 (0.009376) | 0.335342 / 0.275898 (0.059444) | 0.354999 / 0.323480 (0.031519) | 0.005467 / 0.007986 (-0.002519) | 0.005267 / 0.004328 (0.000939) | 0.073988 / 0.004250 (0.069737) | 0.044402 / 0.037052 (0.007350) | 0.331156 / 0.258489 (0.072666) | 0.363595 / 0.293841 (0.069754) | 0.035301 / 0.128546 (-0.093245) | 0.012141 / 0.075646 (-0.063505) | 0.333164 / 0.419271 (-0.086107) | 0.048818 / 0.043533 (0.005286) | 0.331458 / 0.255139 (0.076319) | 0.343567 / 0.283200 (0.060367) | 0.094963 / 0.141683 (-0.046720) | 1.444383 / 1.452155 (-0.007772) | 1.520093 / 1.492716 (0.027377) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.212311 / 0.018006 (0.194305) | 0.436413 / 0.000490 (0.435923) | 0.000333 / 0.000200 (0.000133) | 0.000057 / 0.000054 (0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026670 / 0.037411 (-0.010742) | 0.105774 / 0.014526 (0.091248) | 0.115796 / 0.176557 (-0.060760) | 0.176504 / 0.737135 (-0.560631) | 0.121883 / 0.296338 (-0.174456) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.400783 / 0.215209 (0.185574) | 4.006608 / 2.077655 (1.928953) | 1.817659 / 1.504120 (0.313539) | 1.619777 / 1.541195 (0.078582) | 1.684247 / 1.468490 (0.215757) | 0.701116 / 4.584777 (-3.883661) | 3.684056 / 3.745712 (-0.061656) | 2.065258 / 5.269862 (-3.204603) | 1.425460 / 4.565676 (-3.140217) | 0.084519 / 0.424275 (-0.339757) | 0.011949 / 0.007607 (0.004342) | 0.496793 / 0.226044 (0.270749) | 4.978864 / 2.268929 (2.709935) | 2.303388 / 55.444624 (-53.141237) | 1.978341 / 6.876477 (-4.898135) | 2.055744 / 2.142072 (-0.086329) | 0.832022 / 4.805227 (-3.973206) | 0.164715 / 6.500664 (-6.335949) | 0.062701 / 0.075469 (-0.012768) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.178723 / 1.841788 (-0.663065) | 14.583986 / 8.074308 (6.509678) | 14.189402 / 10.191392 (3.998010) | 0.183867 / 0.680424 (-0.496557) | 0.017565 / 0.534201 (-0.516636) | 0.421345 / 0.579283 (-0.157938) | 0.420235 / 0.434364 (-0.014129) | 0.496758 / 0.540337 (-0.043580) | 0.591558 / 1.386936 (-0.795378) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007019 / 0.011353 (-0.004334) | 0.004996 / 0.011008 (-0.006012) | 0.073345 / 0.038508 (0.034836) | 0.033077 / 0.023109 (0.009968) | 0.335954 / 0.275898 (0.060056) | 0.372616 / 0.323480 (0.049136) | 0.005678 / 0.007986 (-0.002308) | 0.003906 / 0.004328 (-0.000423) | 0.072841 / 0.004250 (0.068591) | 0.046829 / 0.037052 (0.009777) | 0.335177 / 0.258489 (0.076688) | 0.382862 / 0.293841 (0.089021) | 0.038406 / 0.128546 (-0.090141) | 0.012110 / 0.075646 (-0.063536) | 0.085796 / 0.419271 (-0.333476) | 0.049896 / 0.043533 (0.006363) | 0.338232 / 0.255139 (0.083093) | 0.361054 / 0.283200 (0.077855) | 0.103171 / 0.141683 (-0.038512) | 1.556692 / 1.452155 (0.104538) | 1.540023 / 1.492716 (0.047306) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old 
(diff) | 0.223705 / 0.018006 (0.205699) | 0.438771 / 0.000490 (0.438282) | 0.002838 / 0.000200 (0.002639) | 0.000081 / 0.000054 (0.000026) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028423 / 0.037411 (-0.008988) | 0.110560 / 0.014526 (0.096035) | 0.121629 / 0.176557 (-0.054928) | 0.173638 / 0.737135 (-0.563498) | 0.127062 / 0.296338 (-0.169277) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.425806 / 0.215209 (0.210597) | 4.251051 / 2.077655 (2.173397) | 2.059735 / 1.504120 (0.555615) | 1.864886 / 1.541195 (0.323692) | 1.941553 / 1.468490 (0.473063) | 0.700084 / 4.584777 (-3.884693) | 3.753150 / 3.745712 (0.007438) | 3.218606 / 5.269862 (-2.051256) | 1.439648 / 4.565676 (-3.126028) | 0.085239 / 0.424275 (-0.339037) | 0.012026 / 0.007607 (0.004419) | 0.521564 / 0.226044 (0.295520) | 5.217902 / 2.268929 (2.948973) | 2.557831 / 55.444624 (-52.886793) | 2.240223 / 6.876477 (-4.636254) | 2.364664 / 2.142072 (0.222591) | 0.825884 / 4.805227 (-3.979343) | 0.167800 / 6.500664 (-6.332864) | 0.063552 / 0.075469 (-0.011917) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.255532 / 1.841788 (-0.586256) | 14.747783 / 8.074308 (6.673475) | 14.352263 / 10.191392 (4.160871) | 0.143659 / 0.680424 (-0.536765) | 0.017517 / 0.534201 (-0.516684) | 0.419863 / 0.579283 (-0.159421) | 0.416674 / 0.434364 (-0.017690) | 0.485694 / 0.540337 (-0.054643) | 0.584810 / 1.386936 (-0.802126) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#61db0e9c936bc67c18b37b0960e2f0bb1f8ffdcd \"CML watermark\")\n" ]
created_at: 1,680,283,046,000
updated_at: 1,681,911,452,000
closed_at: 1,681,889,158,000
author_association: MEMBER
active_lock_reason: null
body: This PR addresses the comments in #5687 about compressing text files before uploading them to the Hub. It also clarifies what "too large" means based on the Git LFS [docs](https://docs.github.com/en/repositories/working-with-files/managing-large-files/about-git-large-file-storage).
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/5691/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/5691/timeline
performed_via_github_app: null
state_reason: null
draft: 0
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/5691", "html_url": "https://github.com/huggingface/datasets/pull/5691", "diff_url": "https://github.com/huggingface/datasets/pull/5691.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5691.patch", "merged_at": "2023-04-19T07:25:58" }
is_pull_request: true
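As a hedged illustration of the practice this docs PR recommends, a large text file can be stream-compressed with the standard library before committing it to the Hub. The file names here are made-up placeholders.

```python
import gzip
import shutil

# Placeholder paths; in practice this is whatever large text/CSV/JSONL file
# you are about to upload to the Hub.
src_path, dst_path = "train.jsonl", "train.jsonl.gz"

with open(src_path, "rb") as src, gzip.open(dst_path, "wb") as dst:
    shutil.copyfileobj(src, dst)  # stream-compress without loading the file into memory
```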

url: https://api.github.com/repos/huggingface/datasets/issues/5689
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/5689/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/5689/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/5689/events
html_url: https://github.com/huggingface/datasets/pull/5689
id: 1,648,956,349
node_id: PR_kwDODunzps5NVMuI
number: 5,689
title: Support streaming Beam datasets from HF GCS preprocessed data
user: { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
labels: []
state: closed
locked: false
assignee: null
assignees: []
comments:
[ "_The documentation is not available anymore as the PR was closed or merged._", "```python\r\nIn [1]: from datasets import load_dataset\r\n\r\nIn [2]: ds = load_dataset(\"wikipedia\", \"20220301.en\", split=\"train\", streaming=True); item = next(iter(ds)); item\r\nOut[2]: \r\n{'id': '12',\r\n 'url': 'https://en.wikipedia.org/wiki/Anarchism',\r\n 'title': 'Anarchism',\r\n 'text': 'Anarchism is a political philosophy and movement that is sceptical of authority and rejects all involuntary, coercive forms of hierarchy. Anarchism calls for the abolition of the state, which it holds to be unnecessary, undesirable, and harmful. As a historically left-wing movement, placed on the farthest left of the political spectrum, it is usually described alongside communalism and libertarian Marxism as the libertarian wing (libertarian socialism) of the socialist movement,...}\r\n```", "I love your example πŸ΄β€πŸ…°οΈ", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007859 / 0.011353 (-0.003493) | 0.005129 / 0.011008 (-0.005879) | 0.098070 / 0.038508 (0.059562) | 0.036500 / 0.023109 (0.013391) | 0.311575 / 0.275898 (0.035677) | 0.338351 / 0.323480 (0.014872) | 0.005962 / 0.007986 (-0.002024) | 0.004060 / 0.004328 (-0.000268) | 0.072970 / 0.004250 (0.068719) | 0.049289 / 0.037052 (0.012237) | 0.310303 / 0.258489 (0.051814) | 0.347449 / 0.293841 (0.053608) | 0.046912 / 0.128546 (-0.081634) | 0.011952 / 0.075646 (-0.063694) | 0.333600 / 0.419271 (-0.085671) | 0.052700 / 0.043533 (0.009167) | 0.325486 / 0.255139 (0.070347) | 0.326920 / 0.283200 (0.043720) | 0.107683 / 0.141683 (-0.034000) | 1.416679 / 1.452155 (-0.035476) | 1.502418 / 1.492716 (0.009702) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.216520 / 0.018006 (0.198514) | 0.448450 / 0.000490 (0.447960) | 0.004213 / 0.000200 (0.004013) | 0.000082 / 0.000054 (0.000028) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027081 / 0.037411 (-0.010331) | 0.110989 / 0.014526 (0.096463) | 0.116087 / 0.176557 (-0.060470) | 0.173771 / 0.737135 
(-0.563364) | 0.121240 / 0.296338 (-0.175099) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.399938 / 0.215209 (0.184729) | 4.017665 / 2.077655 (1.940010) | 1.782327 / 1.504120 (0.278207) | 1.612955 / 1.541195 (0.071761) | 1.698839 / 1.468490 (0.230349) | 0.706702 / 4.584777 (-3.878075) | 4.533425 / 3.745712 (0.787713) | 2.102611 / 5.269862 (-3.167250) | 1.461429 / 4.565676 (-3.104248) | 0.085719 / 0.424275 (-0.338556) | 0.012104 / 0.007607 (0.004497) | 0.507397 / 0.226044 (0.281352) | 5.061572 / 2.268929 (2.792643) | 2.272106 / 55.444624 (-53.172518) | 1.935575 / 6.876477 (-4.940901) | 2.102541 / 2.142072 (-0.039532) | 0.838395 / 4.805227 (-3.966832) | 0.168573 / 6.500664 (-6.332091) | 0.064234 / 0.075469 (-0.011235) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.190077 / 1.841788 (-0.651710) | 15.765587 / 8.074308 (7.691279) | 14.694626 / 10.191392 (4.503234) | 0.142912 / 0.680424 (-0.537512) | 0.017669 / 0.534201 (-0.516532) | 0.421502 / 0.579283 (-0.157781) | 0.452732 / 0.434364 (0.018368) | 0.497480 / 0.540337 (-0.042857) | 0.586310 / 1.386936 (-0.800626) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007629 / 0.011353 (-0.003724) | 0.005330 / 0.011008 (-0.005679) | 0.076366 / 0.038508 (0.037858) | 0.034703 / 0.023109 (0.011593) | 0.356300 / 0.275898 (0.080402) | 0.392909 / 0.323480 (0.069429) | 0.005959 / 0.007986 (-0.002026) | 
0.004140 / 0.004328 (-0.000188) | 0.075289 / 0.004250 (0.071039) | 0.047880 / 0.037052 (0.010828) | 0.357289 / 0.258489 (0.098800) | 0.404554 / 0.293841 (0.110714) | 0.037182 / 0.128546 (-0.091365) | 0.012266 / 0.075646 (-0.063380) | 0.088554 / 0.419271 (-0.330718) | 0.049698 / 0.043533 (0.006165) | 0.353453 / 0.255139 (0.098314) | 0.373252 / 0.283200 (0.090052) | 0.101892 / 0.141683 (-0.039791) | 1.481534 / 1.452155 (0.029380) | 1.553818 / 1.492716 (0.061102) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.229891 / 0.018006 (0.211884) | 0.452444 / 0.000490 (0.451954) | 0.000434 / 0.000200 (0.000234) | 0.000058 / 0.000054 (0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030170 / 0.037411 (-0.007241) | 0.115097 / 0.014526 (0.100571) | 0.122094 / 0.176557 (-0.054463) | 0.171352 / 0.737135 (-0.565784) | 0.128441 / 0.296338 (-0.167898) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.428347 / 0.215209 (0.213138) | 4.266243 / 2.077655 (2.188588) | 2.148327 / 1.504120 (0.644207) | 1.874141 / 1.541195 (0.332946) | 1.968737 / 1.468490 (0.500246) | 0.715320 / 4.584777 (-3.869457) | 4.166097 / 3.745712 (0.420384) | 2.169550 / 5.269862 (-3.100312) | 1.377441 / 4.565676 (-3.188236) | 0.086376 / 0.424275 (-0.337899) | 0.012018 / 0.007607 (0.004411) | 0.517433 / 0.226044 (0.291388) | 5.167327 / 2.268929 (2.898398) | 2.545822 / 55.444624 (-52.898803) | 2.241726 / 6.876477 (-4.634751) | 2.327220 / 2.142072 (0.185147) | 0.841618 / 4.805227 (-3.963609) | 0.169473 / 6.500664 (-6.331191) | 0.065505 / 0.075469 (-0.009964) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.270476 / 1.841788 (-0.571312) | 17.049885 / 8.074308 (8.975577) | 14.847615 / 10.191392 (4.656223) | 0.168671 / 0.680424 (-0.511753) | 0.017564 / 0.534201 (-0.516637) | 0.424780 / 0.579283 (-0.154503) | 0.517392 / 0.434364 (0.083028) | 0.561197 / 0.540337 (0.020859) | 0.697792 / 1.386936 (-0.689144) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ce06edf0afb70027ffbd3c2ddec5d28037e9bd31 \"CML watermark\")\n" ]
created_at: 1,680,252,264,000
updated_at: 1,681,279,075,000
closed_at: 1,681,278,631,000
author_association: MEMBER
active_lock_reason: null
body: This PR implements streaming for Apache Beam datasets that have already been preprocessed by us and stored in HF Google Cloud Storage: - natural_questions - wiki40b - wikipedia This is done by streaming from the prepared Arrow files in HF Google Cloud Storage, which will fix their corresponding dataset viewers. Related to: - https://github.com/huggingface/datasets-server/pull/988#discussion_r1150767138 - https://huggingface.co/datasets/natural_questions/discussions/4 - https://huggingface.co/datasets/wiki40b/discussions/2 - https://huggingface.co/datasets/wikipedia/discussions/9 CC: @severo
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/5689/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/5689/timeline
performed_via_github_app: null
state_reason: null
draft: 0
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/5689", "html_url": "https://github.com/huggingface/datasets/pull/5689", "diff_url": "https://github.com/huggingface/datasets/pull/5689.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5689.patch", "merged_at": "2023-04-12T05:50:30" }
is_pull_request: true

url: https://api.github.com/repos/huggingface/datasets/issues/5690
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/5690/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/5690/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/5690/events
html_url: https://github.com/huggingface/datasets/issues/5690
id: 1,649,289,883
node_id: I_kwDODunzps5iTiqb
number: 5,690
title: raise AttributeError(f"No {package_name} attribute {name}") AttributeError: No huggingface_hub attribute hf_api
user: { "login": "wccccp", "id": 55964850, "node_id": "MDQ6VXNlcjU1OTY0ODUw", "avatar_url": "https://avatars.githubusercontent.com/u/55964850?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wccccp", "html_url": "https://github.com/wccccp", "followers_url": "https://api.github.com/users/wccccp/followers", "following_url": "https://api.github.com/users/wccccp/following{/other_user}", "gists_url": "https://api.github.com/users/wccccp/gists{/gist_id}", "starred_url": "https://api.github.com/users/wccccp/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wccccp/subscriptions", "organizations_url": "https://api.github.com/users/wccccp/orgs", "repos_url": "https://api.github.com/users/wccccp/repos", "events_url": "https://api.github.com/users/wccccp/events{/privacy}", "received_events_url": "https://api.github.com/users/wccccp/received_events", "type": "User", "site_admin": false }
labels: [ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
state: open
locked: false
assignee: null
assignees: []
comments:
[ "Hi @wccccp, thanks for reporting. \r\nThat's weird since `huggingface_hub` _has_ a module called `hf_api` and you are using a recent version of it. \r\n\r\nWhich version of `datasets` are you using? And is it a bug that you experienced only recently? (cc @lhoestq can it be somehow related to the recent release of `datasets`?)\r\n\r\n~@wccccp what I can suggest you is to uninstall and reinstall completely huggingface_hub and datasets? My first guess is that there is a discrepancy somewhere in your setup πŸ˜•~", "@wccccp Actually I have also been able to reproduce the error so it's not an issue with your setup.\r\n\r\n@huggingface/datasets I found this issue quite weird. Is this a module that is not used very often?\r\nThe problematic line is [this one](https://github.com/huggingface/datasets/blame/c33e8ce68b5000988bf6b2e4bca27ffaa469acea/src/datasets/data_files.py#L476) where `huggingface_hub.hf_api.DatasetInfo` is used. `huggingface_hub` is imported [here](https://github.com/huggingface/datasets/blame/c33e8ce68b5000988bf6b2e4bca27ffaa469acea/src/datasets/data_files.py#L6) as `import huggingface_hub`. However since modules are lazy-loaded in `hfh` you need to explicitly import them (i.e. `import huggingface_hub.hf_api`).\r\n\r\nWhat's weird is that nothing has changed for months. Datasets code seems that it didn't change for 2 years when I git-blame this part. And lazy-loading was introduced 1 year ago in `huggingface_hub`. Could it be that `data_files.py` is a file almost never used?\r\n", "For context, I tried to run `import huggingface_hub; huggingface_hub.hf_api.DatasetInfo` in the terminal with different versions of `hfh` and I need to go back to `huggingface_hub==0.7.0` to make it work (latest is 0.13.3).", "Before the error happens at line 120 in `data_files.py`, `datasets.filesystems.hffilesystem` is imported at the top of `data_files.py` and this file does `from huggingface_hub.hf_api import DatasetInfo` - so `huggingface_hub.hf_api` is imported. Not sure how the error could happen, what version of `datasets` are you using @wccccp ?" ]
created_at: 1,680,250,942,000
updated_at: 1,680,537,745,000
closed_at: null
author_association: NONE
active_lock_reason: null
body:
### Describe the bug rta.sh Traceback (most recent call last): File "run.py", line 7, in <module> import datasets File "/home/appuser/miniconda3/envs/pt2/lib/python3.8/site-packages/datasets/__init__.py", line 37, in <module> from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder File "/home/appuser/miniconda3/envs/pt2/lib/python3.8/site-packages/datasets/builder.py", line 44, in <module> from .data_files import DataFilesDict, _sanitize_patterns File "/home/appuser/miniconda3/envs/pt2/lib/python3.8/site-packages/datasets/data_files.py", line 120, in <module> dataset_info: huggingface_hub.hf_api.DatasetInfo, File "/home/appuser/miniconda3/envs/pt2/lib/python3.8/site-packages/huggingface_hub/__init__.py", line 290, in __getattr__ raise AttributeError(f"No {package_name} attribute {name}") AttributeError: No huggingface_hub attribute hf_api ### Reproduction _No response_ ### Logs ```shell Traceback (most recent call last): File "run.py", line 7, in <module> import datasets File "/home/appuser/miniconda3/envs/pt2/lib/python3.8/site-packages/datasets/__init__.py", line 37, in <module> from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder File "/home/appuser/miniconda3/envs/pt2/lib/python3.8/site-packages/datasets/builder.py", line 44, in <module> from .data_files import DataFilesDict, _sanitize_patterns File "/home/appuser/miniconda3/envs/pt2/lib/python3.8/site-packages/datasets/data_files.py", line 120, in <module> dataset_info: huggingface_hub.hf_api.DatasetInfo, File "/home/appuser/miniconda3/envs/pt2/lib/python3.8/site-packages/huggingface_hub/__init__.py", line 290, in __getattr__ raise AttributeError(f"No {package_name} attribute {name}") AttributeError: No huggingface_hub attribute hf_api ``` ### System info ```shell - huggingface_hub version: 0.13.2 - Platform: Linux-5.4.0-144-generic-x86_64-with-glibc2.10 - Python version: 3.8.5 - Running in iPython ?: No - Running in notebook ?: No - Running in Google Colab ?: No - Token path ?: /home/appuser/.cache/huggingface/token - Has saved token ?: False - Configured git credential helpers: - FastAI: N/A - Tensorflow: N/A - Torch: 1.7.1 - Jinja2: N/A - Graphviz: N/A - Pydot: N/A - Pillow: 9.3.0 - hf_transfer: N/A - ENDPOINT: https://huggingface.co - HUGGINGFACE_HUB_CACHE: /home/appuser/.cache/huggingface/hub - HUGGINGFACE_ASSETS_CACHE: /home/appuser/.cache/huggingface/assets - HF_TOKEN_PATH: /home/appuser/.cache/huggingface/token - HF_HUB_OFFLINE: False - HF_HUB_DISABLE_TELEMETRY: False - HF_HUB_DISABLE_PROGRESS_BARS: None - HF_HUB_DISABLE_SYMLINKS_WARNING: False - HF_HUB_DISABLE_IMPLICIT_TOKEN: False ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5690/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5690/timeline
null
null
null
null
false
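The workaround named in the comments of this issue, importing the lazily loaded submodule explicitly, looks like the following minimal sketch; the version behavior is as described in the thread, not independently verified here.

```python
# With lazy module loading in recent huggingface_hub releases, attribute
# access on the bare package may fail, so import the submodule explicitly.
import huggingface_hub.hf_api

# Now the attribute chain used in datasets/data_files.py resolves.
print(huggingface_hub.hf_api.DatasetInfo)
```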

url: https://api.github.com/repos/huggingface/datasets/issues/5688
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/5688/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/5688/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/5688/events
html_url: https://github.com/huggingface/datasets/issues/5688
id: 1,648,463,504
node_id: I_kwDODunzps5iQY6Q
number: 5,688
title: Wikipedia download_and_prepare for GCS
user: { "login": "adrianfagerland", "id": 25522531, "node_id": "MDQ6VXNlcjI1NTIyNTMx", "avatar_url": "https://avatars.githubusercontent.com/u/25522531?v=4", "gravatar_id": "", "url": "https://api.github.com/users/adrianfagerland", "html_url": "https://github.com/adrianfagerland", "followers_url": "https://api.github.com/users/adrianfagerland/followers", "following_url": "https://api.github.com/users/adrianfagerland/following{/other_user}", "gists_url": "https://api.github.com/users/adrianfagerland/gists{/gist_id}", "starred_url": "https://api.github.com/users/adrianfagerland/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/adrianfagerland/subscriptions", "organizations_url": "https://api.github.com/users/adrianfagerland/orgs", "repos_url": "https://api.github.com/users/adrianfagerland/repos", "events_url": "https://api.github.com/users/adrianfagerland/events{/privacy}", "received_events_url": "https://api.github.com/users/adrianfagerland/received_events", "type": "User", "site_admin": false }
labels: []
state: open
locked: false
assignee: { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
assignees: [ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
comments:
[ "Hi @adrianfagerland, thanks for reporting.\r\n\r\nPlease note that \"wikipedia\" is a special dataset, with an Apache Beam builder: https://beam.apache.org/\r\nYou can find more info about Beam datasets in our docs: https://huggingface.co/docs/datasets/beam\r\n\r\nIt was implemented to be run in parallel processing, using one of the distributed back-ends supported by Apache Beam: https://beam.apache.org/get-started/beam-overview/#apache-beam-pipeline-runners\r\n\r\nThat is, you are trying to process the source wikipedia data on your machine (not distributed) when passing `beam_runner=\"DirectRunner\"`.\r\n\r\nAs documented in the wikipedia dataset page (https://huggingface.co/datasets/wikipedia):\r\n\r\n Some subsets of Wikipedia have already been processed by HuggingFace, and you can load them just with:\r\n \r\n from datasets import load_dataset\r\n \r\n load_dataset(\"wikipedia\", \"20220301.en\")\r\n\r\n The list of pre-processed subsets is:\r\n - \"20220301.de\"\r\n - \"20220301.en\"\r\n - \"20220301.fr\"\r\n - \"20220301.frr\"\r\n - \"20220301.it\"\r\n - \"20220301.simple\"\r\n\r\nTo download the available processed data (in Arrow format):\r\n```python\r\nbuilder = datasets.load_dataset_builder(\"wikipedia\", \"20220301.en\")\r\nbuilder.download_and_prepare(your_path)\r\n```", "When running this using :\r\n```\r\nimport datasets\r\nfrom apache_beam.options.pipeline_options import PipelineOptions\r\nfrom gcsfs import GCSFileSystem\r\n\r\nstorage_options = {\"project\":\"tdt4310\", \"token\":\"cloud\"}\r\nfs = GCSFileSystem(**storage_options)\r\n\r\noutput_dir = \"gcs://quiz_transformer/\"\r\nbeam_options = PipelineOptions(\r\n region=\"europe-west4\",\r\n project=\"tdt4310\",\r\n temp_location=output_dir+\"tmp/\")\r\n\r\n\r\nbuilder = datasets.load_dataset_builder(\"wikipedia\", \"20220301.en\", beam_runner=\"dataflow\", beam_options=beam_options)\r\nbuilder.download_and_prepare(\r\n output_dir, storage_options=storage_options, file_format=\"parquet\")\r\n```\r\nI now get this error:\r\n```\r\nraise FileNotFoundError(f\"Couldn't find file at {url}\")\r\nFileNotFoundError: Couldn't find file at https://dumps.wikimedia.org/enwiki/20220301/dumpstatus.json\r\nDownloading data files: 0%| | 0/1 [00:00<?, ?it/s]\r\n```\r\n\r\nI get the same error for this:\r\n```\r\nimport datasets\r\nfrom gcsfs import GCSFileSystem\r\n\r\nstorage_options = {\"project\":\"tdt4310\", \"token\":\"cloud\"}\r\nfs = GCSFileSystem(**storage_options)\r\n\r\noutput_dir = \"gcs://quiz_transformer/\"\r\nbuilder = datasets.load_dataset_builder(\"wikipedia\", \"20220301.en\")\r\nbuilder.download_and_prepare(\r\n output_dir, storage_options=storage_options, file_format=\"parquet\")\r\n```\r\n\r\n\r\n\r\n" ]
1,680,219,802,000
1,680,269,492,000
null
NONE
null
### Describe the bug I am unable to download the wikipedia dataset onto GCS. When I run the script provided, the memory first gets eaten up, then it crashes. I tried running this on a VM with 128GB RAM and all I got were two empty files: _data_builder.lock_, _data.incomplete/beam-temp-wikipedia-train-1ab2039acf3611ed87a9893475de0093_ I have troubleshot this for two straight days now, but I am just unable to get the dataset into storage. ### Steps to reproduce the bug Run this and insert a path: ``` import datasets builder = datasets.load_dataset_builder( "wikipedia", language="en", date="20230320", beam_runner="DirectRunner") builder.download_and_prepare({path}, file_format="parquet") ``` This is where the problem of it eating RAM occurs. I have also tried several versions of this, based on the docs: ``` import gcsfs import datasets storage_options = {"project": "tdt4310", "token": "cloud"} fs = gcsfs.GCSFileSystem(**storage_options) output_dir = "gcs://wikipediadata/" builder = datasets.load_dataset_builder( "wikipedia", date="20230320", language="en", beam_runner="DirectRunner") builder.download_and_prepare( output_dir, storage_options=storage_options, file_format="parquet") ``` The error message received here is: > ValueError: Unable to get filesystem from specified path, please use the correct path or ensure the required dependency is installed, e.g., pip install apache-beam[gcp]. Path specified: gcs://wikipediadata/wikipedia-train [while running 'train/Save to parquet/Write/WriteImpl/InitializeWrite'] I have run `pip install apache-beam[gcp]` ### Expected behavior The wikipedia data loaded into GCS Everything worked when testing with a smaller demo dataset found somewhere in the docs ### Environment info Newest published version of datasets. Python 3.9. Also tested with Python 3.7. 128GB RAM Google Cloud VM instance.
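A sketch combining the suggestions from the comments above: use a pre-processed Wikipedia subset (no Beam runner needed) and write Parquet shards straight to GCS. The bucket and project names are placeholder assumptions, and `gcsfs` must be installed for `gcs://` paths to resolve.
```python
# Hedged sketch, assuming a placeholder GCP project and bucket.
import datasets

storage_options = {"project": "my-gcp-project", "token": "cloud"}
builder = datasets.load_dataset_builder("wikipedia", "20220301.en")
builder.download_and_prepare(
    "gcs://my-bucket/wikipedia/",      # placeholder output location
    storage_options=storage_options,
    file_format="parquet",
)
```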
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5688/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5688/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5687
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5687/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5687/comments
https://api.github.com/repos/huggingface/datasets/issues/5687/events
https://github.com/huggingface/datasets/issues/5687
1,647,009,018
I_kwDODunzps5iK1z6
5,687
Document compressing data files before uploading
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
closed
false
null
[]
[ "Great idea!\r\n\r\nShould we also take this opportunity to include some audio/image file formats? Currently, it still reads very text heavy. Something like:\r\n\r\n> We support many text, audio, and image data extensions such as `.zip`, `.rar`, `.mp3`, and `.jpg` among many others. For data extensions like `.csv`, `.json`, `.jsonl`, and `txt`, we recommend compressing them before uploading to the Hub. These file extensions are not tracked by Git LFS by default, and if they're too large, they will not be committed and uploaded. Take a look at the `.gitattributes` file in your repository for a complete list of supported file extensions.", "Hi @stevhliu, thanks for your suggestion.\r\n\r\nI agree it is a good opportunity to mention that audio/image file formats are also supported.\r\n\r\nNit:\r\nI would not mention .zip, .rar after \"text, audio, and image data extensions\". Those are \"compression\" extensions and not \"text, audio, and image data extensions\".\r\n\r\nWhat about something similar to:\r\n> We support many text, audio, and image data extensions such as `.csv`, `.mp3`, and `.jpg` among many others. For text data extensions like `.csv`, `.json`, `.jsonl`, and `.txt`, we recommend compressing them before uploading to the Hub (to `.zip` or `.gz` file extension for example). \r\n>\r\n> Note that text file extensions are not tracked by Git LFS by default, and if they're too large, they will not be committed and uploaded. Take a look at the `.gitattributes` file in your repository for a complete list of tracked file extensions by default.\r\n\r\nNote that for compressions I have mentioned:\r\n- gz, to compress individual files\r\n- zip, to compress and archive multiple files; zip is preferred rather than tar because it supports streaming out of the box", "Perfect, thanks for making the distinction between compression and data extensions!" ]
1,680,158,467,000
1,681,889,159,000
1,681,889,159,000
MEMBER
null
In our docs on [Share a dataset to the Hub](https://huggingface.co/docs/datasets/upload_dataset), we tell users to upload their data files directly, like CSV, JSON, JSON-Lines, text,... However, these extensions are not tracked by Git LFS by default, as they are not in the `.gitattributes` file. Therefore, if they are too large, Git will fail to commit/upload them. I think for those file extensions (.csv, .json, .jsonl, .txt), we should recommend **compressing** their data files (using ZIP, for example) before uploading them to the Hub. - Compressed files are tracked by Git LFS in our default `.gitattributes` file What do you think? CC: @stevhliu See related issue: - https://huggingface.co/datasets/tcor0005/langchain-docs-400-chunksize/discussions/1
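A minimal sketch of the recommendation above: gzip-compress a text data file before uploading, so the resulting `.gz` file is tracked by Git LFS under the default `.gitattributes`. The file paths are placeholder assumptions.
```python
# Hedged sketch: compress a CSV to .csv.gz before pushing it to the Hub.
import gzip
import shutil

with open("data/train.csv", "rb") as src:        # placeholder path
    with gzip.open("data/train.csv.gz", "wb") as dst:
        shutil.copyfileobj(src, dst)

# `load_dataset` can typically read the compressed file directly afterwards:
#   load_dataset("csv", data_files="data/train.csv.gz")
```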
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5687/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5687/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5686
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5686/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5686/comments
https://api.github.com/repos/huggingface/datasets/issues/5686/events
https://github.com/huggingface/datasets/pull/5686
1,646,308,228
PR_kwDODunzps5NMXdu
5,686
set dev version
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5686). All of your documentation changes will be reflected on that endpoint.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008460 / 0.011353 (-0.002893) | 0.006114 / 0.011008 (-0.004894) | 0.121496 / 0.038508 (0.082987) | 0.035030 / 0.023109 (0.011920) | 0.397778 / 0.275898 (0.121880) | 0.429020 / 0.323480 (0.105540) | 0.007811 / 0.007986 (-0.000174) | 0.006269 / 0.004328 (0.001940) | 0.098895 / 0.004250 (0.094645) | 0.045407 / 0.037052 (0.008355) | 0.413679 / 0.258489 (0.155189) | 0.437491 / 0.293841 (0.143650) | 0.053207 / 0.128546 (-0.075339) | 0.018471 / 0.075646 (-0.057175) | 0.414800 / 0.419271 (-0.004472) | 0.060864 / 0.043533 (0.017332) | 0.398501 / 0.255139 (0.143362) | 0.421142 / 0.283200 (0.137942) | 0.114908 / 0.141683 (-0.026775) | 1.678630 / 1.452155 (0.226475) | 1.782313 / 1.492716 (0.289596) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.280783 / 0.018006 (0.262777) | 0.591573 / 0.000490 (0.591083) | 0.005797 / 0.000200 (0.005597) | 0.000115 / 0.000054 (0.000060) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030431 / 0.037411 (-0.006981) | 0.117342 / 0.014526 (0.102816) | 0.128456 / 0.176557 (-0.048101) | 0.198782 / 0.737135 (-0.538354) | 0.128501 / 0.296338 (-0.167838) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / 
old (diff) | 0.603073 / 0.215209 (0.387864) | 6.101354 / 2.077655 (4.023699) | 2.527812 / 1.504120 (1.023692) | 2.101468 / 1.541195 (0.560273) | 2.092813 / 1.468490 (0.624323) | 1.182150 / 4.584777 (-3.402627) | 5.389278 / 3.745712 (1.643566) | 5.041001 / 5.269862 (-0.228860) | 2.650581 / 4.565676 (-1.915095) | 0.138761 / 0.424275 (-0.285514) | 0.014209 / 0.007607 (0.006602) | 0.748596 / 0.226044 (0.522552) | 7.373937 / 2.268929 (5.105008) | 3.245882 / 55.444624 (-52.198742) | 2.523569 / 6.876477 (-4.352908) | 2.581343 / 2.142072 (0.439270) | 1.340436 / 4.805227 (-3.464791) | 0.241388 / 6.500664 (-6.259276) | 0.076634 / 0.075469 (0.001164) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.480237 / 1.841788 (-0.361551) | 16.781338 / 8.074308 (8.707030) | 19.735028 / 10.191392 (9.543636) | 0.256872 / 0.680424 (-0.423551) | 0.029211 / 0.534201 (-0.504990) | 0.503292 / 0.579283 (-0.075991) | 0.584510 / 0.434364 (0.150146) | 0.580293 / 0.540337 (0.039955) | 0.678863 / 1.386936 (-0.708073) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009972 / 0.011353 (-0.001381) | 0.006107 / 0.011008 (-0.004902) | 0.096188 / 0.038508 (0.057680) | 0.033320 / 0.023109 (0.010210) | 0.420789 / 0.275898 (0.144891) | 0.460488 / 0.323480 (0.137008) | 0.006492 / 0.007986 (-0.001493) | 0.005325 / 0.004328 (0.000997) | 0.094974 / 0.004250 (0.090723) | 0.047708 / 0.037052 (0.010655) | 0.426689 / 0.258489 (0.168200) | 0.476440 / 0.293841 (0.182599) | 0.052776 / 0.128546 (-0.075770) | 0.018779 / 0.075646 (-0.056868) | 0.119598 / 0.419271 (-0.299673) | 0.061800 / 0.043533 (0.018267) | 0.421305 / 0.255139 (0.166166) | 0.441125 / 0.283200 (0.157925) | 0.114221 / 0.141683 (-0.027462) | 1.712681 / 1.452155 (0.260526) | 1.852316 / 1.492716 (0.359600) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.272412 / 0.018006 (0.254405) | 0.583996 / 0.000490 (0.583506) | 0.000505 / 0.000200 
(0.000305) | 0.000077 / 0.000054 (0.000022) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029553 / 0.037411 (-0.007858) | 0.124921 / 0.014526 (0.110395) | 0.133338 / 0.176557 (-0.043218) | 0.193811 / 0.737135 (-0.543325) | 0.147973 / 0.296338 (-0.148365) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.595241 / 0.215209 (0.380032) | 6.012015 / 2.077655 (3.934360) | 2.611295 / 1.504120 (1.107175) | 2.290127 / 1.541195 (0.748932) | 2.300366 / 1.468490 (0.831876) | 1.197602 / 4.584777 (-3.387175) | 5.439064 / 3.745712 (1.693352) | 2.906088 / 5.269862 (-2.363773) | 1.919183 / 4.565676 (-2.646493) | 0.132166 / 0.424275 (-0.292109) | 0.014544 / 0.007607 (0.006937) | 0.726377 / 0.226044 (0.500333) | 7.361023 / 2.268929 (5.092094) | 3.289266 / 55.444624 (-52.155358) | 2.635570 / 6.876477 (-4.240907) | 2.595691 / 2.142072 (0.453619) | 1.329458 / 4.805227 (-3.475769) | 0.239419 / 6.500664 (-6.261245) | 0.076316 / 0.075469 (0.000847) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.547616 / 1.841788 (-0.294172) | 17.374315 / 8.074308 (9.300007) | 20.216275 / 10.191392 (10.024883) | 0.252102 / 0.680424 (-0.428322) | 0.027535 / 0.534201 (-0.506665) | 0.524618 / 0.579283 (-0.054666) | 0.596803 / 0.434364 (0.162439) | 0.652632 / 0.540337 (0.112294) | 0.762272 / 1.386936 (-0.624664) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#8c7d4b2f981f8cf639dcbd80f40a41aa5b1693c6 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after 
write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008236 / 0.011353 (-0.003117) | 0.006186 / 0.011008 (-0.004822) | 0.117852 / 0.038508 (0.079344) | 0.034711 / 0.023109 (0.011602) | 0.447564 / 0.275898 (0.171666) | 0.438727 / 0.323480 (0.115247) | 0.006576 / 0.007986 (-0.001410) | 0.005903 / 0.004328 (0.001574) | 0.094309 / 0.004250 (0.090059) | 0.042760 / 0.037052 (0.005708) | 0.393269 / 0.258489 (0.134780) | 0.438061 / 0.293841 (0.144220) | 0.059029 / 0.128546 (-0.069517) | 0.020296 / 0.075646 (-0.055350) | 0.412057 / 0.419271 (-0.007215) | 0.059808 / 0.043533 (0.016275) | 0.407243 / 0.255139 (0.152104) | 0.414290 / 0.283200 (0.131090) | 0.107701 / 0.141683 (-0.033981) | 1.671522 / 1.452155 (0.219367) | 1.775055 / 1.492716 (0.282338) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.275242 / 0.018006 (0.257236) | 0.599698 / 0.000490 (0.599208) | 0.001289 / 0.000200 (0.001089) | 0.000101 / 0.000054 (0.000046) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029579 / 0.037411 (-0.007832) | 0.127249 / 0.014526 (0.112723) | 0.137431 / 0.176557 (-0.039126) | 0.220330 / 0.737135 (-0.516805) | 0.133540 / 0.296338 (-0.162798) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.571989 / 0.215209 (0.356780) | 5.931503 / 2.077655 (3.853848) | 2.526646 / 1.504120 (1.022527) | 2.189476 / 1.541195 (0.648281) | 2.151935 / 1.468490 (0.683444) | 1.242440 / 4.584777 (-3.342337) | 5.599675 / 3.745712 (1.853963) | 3.242035 / 5.269862 (-2.027826) | 2.368361 / 4.565676 (-2.197315) | 0.145659 / 0.424275 (-0.278616) | 0.013813 / 0.007607 (0.006206) | 0.782495 / 0.226044 (0.556451) | 7.861619 / 2.268929 (5.592690) | 3.241001 / 55.444624 (-52.203623) | 2.611025 / 6.876477 (-4.265452) | 2.667263 / 2.142072 (0.525191) | 1.429992 / 4.805227 (-3.375235) | 0.243008 / 6.500664 (-6.257656) | 0.083686 / 0.075469 (0.008217) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.565526 / 1.841788 (-0.276262) | 18.260815 / 8.074308 (10.186507) | 22.586133 / 10.191392 (12.394741) | 0.231864 / 0.680424 (-0.448559) | 0.030877 / 0.534201 (-0.503324) | 0.569726 / 0.579283 (-0.009557) | 0.678638 / 
0.434364 (0.244274) | 0.611810 / 0.540337 (0.071472) | 0.718771 / 1.386936 (-0.668165) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009398 / 0.011353 (-0.001955) | 0.006452 / 0.011008 (-0.004556) | 0.103352 / 0.038508 (0.064844) | 0.034773 / 0.023109 (0.011664) | 0.523782 / 0.275898 (0.247884) | 0.523554 / 0.323480 (0.200074) | 0.006990 / 0.007986 (-0.000996) | 0.004994 / 0.004328 (0.000666) | 0.102199 / 0.004250 (0.097949) | 0.050087 / 0.037052 (0.013035) | 0.496662 / 0.258489 (0.238173) | 0.563130 / 0.293841 (0.269289) | 0.052851 / 0.128546 (-0.075695) | 0.019824 / 0.075646 (-0.055822) | 0.122657 / 0.419271 (-0.296614) | 0.057714 / 0.043533 (0.014181) | 0.470502 / 0.255139 (0.215363) | 0.518908 / 0.283200 (0.235708) | 0.114374 / 0.141683 (-0.027309) | 1.795918 / 1.452155 (0.343763) | 1.957461 / 1.492716 (0.464744) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.303921 / 0.018006 (0.285915) | 0.584406 / 0.000490 (0.583916) | 0.000444 / 0.000200 (0.000244) | 0.000096 / 0.000054 (0.000042) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032254 / 0.037411 (-0.005158) | 0.129966 / 0.014526 (0.115440) | 0.151000 / 0.176557 (-0.025557) | 0.234060 / 0.737135 (-0.503076) | 0.149444 / 0.296338 (-0.146895) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.666627 / 0.215209 (0.451418) | 7.054701 / 2.077655 (4.977046) | 2.836895 / 1.504120 (1.332775) | 
2.561994 / 1.541195 (1.020799) | 2.672460 / 1.468490 (1.203970) | 1.411929 / 4.584777 (-3.172848) | 6.026918 / 3.745712 (2.281206) | 3.341745 / 5.269862 (-1.928116) | 2.280317 / 4.565676 (-2.285359) | 0.156635 / 0.424275 (-0.267641) | 0.014256 / 0.007607 (0.006649) | 0.804830 / 0.226044 (0.578786) | 8.106960 / 2.268929 (5.838031) | 3.597452 / 55.444624 (-51.847172) | 3.002847 / 6.876477 (-3.873630) | 2.931160 / 2.142072 (0.789088) | 1.484172 / 4.805227 (-3.321056) | 0.254166 / 6.500664 (-6.246498) | 0.080554 / 0.075469 (0.005085) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.809909 / 1.841788 (-0.031879) | 18.988994 / 8.074308 (10.914686) | 23.153442 / 10.191392 (12.962050) | 0.250554 / 0.680424 (-0.429870) | 0.048677 / 0.534201 (-0.485524) | 0.574109 / 0.579283 (-0.005174) | 0.640917 / 0.434364 (0.206553) | 0.725215 / 0.540337 (0.184878) | 0.878234 / 1.386936 (-0.508702) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#e3667d6e17d68503469c8e88ec344b7cccfa2346 \"CML watermark\")\n" ]
1,680,114,253,000
1,680,114,829,000
1,680,114,262,000
MEMBER
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5686/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5686/timeline
null
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5686", "html_url": "https://github.com/huggingface/datasets/pull/5686", "diff_url": "https://github.com/huggingface/datasets/pull/5686.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5686.patch", "merged_at": "2023-03-29T18:24:22" }
true
https://api.github.com/repos/huggingface/datasets/issues/5685
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5685/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5685/comments
https://api.github.com/repos/huggingface/datasets/issues/5685/events
https://github.com/huggingface/datasets/issues/5685
1,646,048,667
I_kwDODunzps5iHLWb
5,685
Broken image rendering on the Hub website
{ "login": "FrancescoSaverioZuppichini", "id": 15908060, "node_id": "MDQ6VXNlcjE1OTA4MDYw", "avatar_url": "https://avatars.githubusercontent.com/u/15908060?v=4", "gravatar_id": "", "url": "https://api.github.com/users/FrancescoSaverioZuppichini", "html_url": "https://github.com/FrancescoSaverioZuppichini", "followers_url": "https://api.github.com/users/FrancescoSaverioZuppichini/followers", "following_url": "https://api.github.com/users/FrancescoSaverioZuppichini/following{/other_user}", "gists_url": "https://api.github.com/users/FrancescoSaverioZuppichini/gists{/gist_id}", "starred_url": "https://api.github.com/users/FrancescoSaverioZuppichini/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/FrancescoSaverioZuppichini/subscriptions", "organizations_url": "https://api.github.com/users/FrancescoSaverioZuppichini/orgs", "repos_url": "https://api.github.com/users/FrancescoSaverioZuppichini/repos", "events_url": "https://api.github.com/users/FrancescoSaverioZuppichini/events{/privacy}", "received_events_url": "https://api.github.com/users/FrancescoSaverioZuppichini/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi! \r\n\r\nYou can fix the viewer by adding the `dataset_info` YAML field deleted in https://huggingface.co/datasets/Francesco/cell-towers/commit/b95b59ddd91ebe9c12920f0efe0ed415cd0d4298 back to the metadata section of the card. \r\n\r\nTo avoid this issue in the feature, you can use `huggingface_hub`'s [RepoCard](https://huggingface.co/docs/huggingface_hub/package_reference/cards) API to update the dataset card instead of `upload_file`:\r\n```python\r\nfrom huggingface_hub import DatasetCard\r\n# Load card\r\ncard = DatasetCard.load(\"<namespace>/<repo_id>\")\r\n# Modify card content\r\ncard.content = ...\r\n# Push card to the Hub\r\ncard.push_to_hub(\"<namespace>/<repo_id>\")\r\n```\r\n\r\nHowever, the best solution would be to use the features info stored in the header of the Parquet shards generated with `push_to_hub` on the viewer side to avoid unexpected issues such as this one. This shouldn't be too hard to address.", "Thanks for reporting @FrancescoSaverioZuppichini.\r\n\r\nFor future issues with your specific dataset, you can use its \"Community\" tab to start a conversation: https://huggingface.co/datasets/Francesco/cell-towers/discussions/new", "Thanks @albertvillanova , @mariosasko I was not aware of this requirement from the doc (must have skipped :sweat_smile: )\r\n\r\nConfirmed, adding back `dataset_info` fixed the issu" ]
1,680,103,530,000
1,680,162,865,000
1,680,162,865,000
NONE
null
### Describe the bug Hi :wave: Not sure if this is the right place to ask, but I am trying to upload a large number of datasets to the Hub (:partying_face: ) and I am facing a little issue with the `image` type ![image](https://user-images.githubusercontent.com/15908060/228587875-427a37f1-3a31-4e17-8bbe-0f759003910d.png) See this [dataset](https://huggingface.co/datasets/Francesco/cell-towers): basically, for some reason the first image has numerical bytes inside; not sure if that is okay, but the image rendering feature **doesn't work** The dataset is stored in the following way ```python builder.download_and_prepare(output_dir=str(output_dir)) ds = builder.as_dataset(split="train") # [NOTE] no idea how to push it from the builder folder ds.push_to_hub(repo_id=repo_id) builder.as_dataset(split="validation").push_to_hub(repo_id=repo_id) ds = builder.as_dataset(split="test") ds.push_to_hub(repo_id=repo_id) ``` The builder is this class ```python class COCOLikeDatasetBuilder(datasets.GeneratorBasedBuilder): VERSION = datasets.Version("1.0.0") def _info(self): features = datasets.Features( { "image_id": datasets.Value("int64"), "image": datasets.Image(), "width": datasets.Value("int32"), "height": datasets.Value("int32"), "objects": datasets.Sequence( { "id": datasets.Value("int64"), "area": datasets.Value("int64"), "bbox": datasets.Sequence( datasets.Value("float32"), length=4 ), "category": datasets.ClassLabel(names=categories), } ), } ) return datasets.DatasetInfo( description=description, features=features, homepage=homepage, license=license, citation=citation, ) def _split_generators(self, dl_manager): archive = dl_manager.download(url) return [ datasets.SplitGenerator( name=datasets.Split.TRAIN, gen_kwargs={ "annotation_file_path": "train/_annotations.coco.json", "files": dl_manager.iter_archive(archive), }, ), datasets.SplitGenerator( name=datasets.Split.VALIDATION, gen_kwargs={ "annotation_file_path": "test/_annotations.coco.json", "files": dl_manager.iter_archive(archive), }, ), datasets.SplitGenerator( name=datasets.Split.TEST, gen_kwargs={ "annotation_file_path": "valid/_annotations.coco.json", "files": dl_manager.iter_archive(archive), }, ), ] def _generate_examples(self, annotation_file_path, files): def process_annot(annot, category_id_to_category): return { "id": annot["id"], "area": annot["area"], "bbox": annot["bbox"], "category": category_id_to_category[annot["category_id"]], } image_id_to_image = {} idx = 0 # This loop relies on the ordering of the files in the archive: # Annotation files come first, then the images. for path, f in files: file_name = os.path.basename(path) if annotation_file_path in path: annotations = json.load(f) category_id_to_category = { category["id"]: category["name"] for category in annotations["categories"] } print(category_id_to_category) image_id_to_annotations = collections.defaultdict(list) for annot in annotations["annotations"]: image_id_to_annotations[annot["image_id"]].append(annot) image_id_to_image = { annot["file_name"]: annot for annot in annotations["images"] } elif file_name in image_id_to_image: image = image_id_to_image[file_name] objects = [ process_annot(annot, category_id_to_category) for annot in image_id_to_annotations[image["id"]] ] print(file_name) yield idx, { "image_id": image["id"], "image": {"path": path, "bytes": f.read()}, "width": image["width"], "height": image["height"], "objects": objects, } idx += 1 ``` Basically, I want to add to the Hub every dataset I come across in COCO format Thanks Fra ### Steps to reproduce the bug In this case, you can just navigate to the [dataset](https://huggingface.co/datasets/Francesco/cell-towers) ### Expected behavior I was expecting the image rendering feature to work ### Environment info Not a lot to share, I am using `datasets` from a fresh venv
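A small verification sketch for the issue above: loading the pushed dataset back and checking that the `image` column decodes to a `PIL.Image`, which is what the Hub viewer ultimately relies on. The repo id is the one from the report; treating this as a diagnostic, not the reporter's code.
```python
# Sanity-check sketch: if ds[0]["image"] is a PIL image, the Image feature
# metadata survived push_to_hub and the stored bytes are decodable.
from datasets import load_dataset

ds = load_dataset("Francesco/cell-towers", split="train")
print(ds.features["image"])          # expected: Image(decode=True, ...)
first = ds[0]
print(type(first["image"]), first["width"], first["height"])
```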
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5685/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5685/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5684
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5684/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5684/comments
https://api.github.com/repos/huggingface/datasets/issues/5684/events
https://github.com/huggingface/datasets/pull/5684
1,646,013,226
PR_kwDODunzps5NLXWm
5,684
Release: 2.11.0
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007017 / 0.011353 (-0.004335) | 0.004917 / 0.011008 (-0.006091) | 0.098391 / 0.038508 (0.059883) | 0.032677 / 0.023109 (0.009568) | 0.312126 / 0.275898 (0.036227) | 0.352477 / 0.323480 (0.028998) | 0.005960 / 0.007986 (-0.002025) | 0.003801 / 0.004328 (-0.000528) | 0.073916 / 0.004250 (0.069666) | 0.045610 / 0.037052 (0.008557) | 0.319626 / 0.258489 (0.061137) | 0.370575 / 0.293841 (0.076734) | 0.035888 / 0.128546 (-0.092658) | 0.012012 / 0.075646 (-0.063635) | 0.338290 / 0.419271 (-0.080982) | 0.049452 / 0.043533 (0.005919) | 0.301226 / 0.255139 (0.046087) | 0.336744 / 0.283200 (0.053545) | 0.100835 / 0.141683 (-0.040847) | 1.500008 / 1.452155 (0.047853) | 1.566757 / 1.492716 (0.074041) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.220668 / 0.018006 (0.202662) | 0.449273 / 0.000490 (0.448784) | 0.003861 / 0.000200 (0.003661) | 0.000126 / 0.000054 (0.000072) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026847 / 0.037411 (-0.010565) | 0.105916 / 0.014526 (0.091390) | 0.116245 / 0.176557 (-0.060312) | 0.172617 / 0.737135 (-0.564519) | 0.122846 / 0.296338 (-0.173492) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.417906 / 0.215209 (0.202697) | 4.169092 / 2.077655 (2.091437) | 
1.934439 / 1.504120 (0.430319) | 1.735718 / 1.541195 (0.194523) | 1.828205 / 1.468490 (0.359715) | 0.697446 / 4.584777 (-3.887331) | 3.802830 / 3.745712 (0.057118) | 3.686464 / 5.269862 (-1.583398) | 1.863924 / 4.565676 (-2.701752) | 0.086520 / 0.424275 (-0.337755) | 0.012101 / 0.007607 (0.004493) | 0.521252 / 0.226044 (0.295208) | 5.200937 / 2.268929 (2.932009) | 2.414290 / 55.444624 (-53.030334) | 2.070890 / 6.876477 (-4.805587) | 2.237693 / 2.142072 (0.095621) | 0.843417 / 4.805227 (-3.961811) | 0.167856 / 6.500664 (-6.332809) | 0.064997 / 0.075469 (-0.010472) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.212334 / 1.841788 (-0.629454) | 14.710632 / 8.074308 (6.636324) | 14.877489 / 10.191392 (4.686097) | 0.151268 / 0.680424 (-0.529156) | 0.018663 / 0.534201 (-0.515538) | 0.429678 / 0.579283 (-0.149605) | 0.425054 / 0.434364 (-0.009310) | 0.502804 / 0.540337 (-0.037533) | 0.587932 / 1.386936 (-0.799004) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007462 / 0.011353 (-0.003891) | 0.005307 / 0.011008 (-0.005701) | 0.074309 / 0.038508 (0.035801) | 0.033437 / 0.023109 (0.010328) | 0.355087 / 0.275898 (0.079189) | 0.391417 / 0.323480 (0.067937) | 0.005904 / 0.007986 (-0.002082) | 0.004062 / 0.004328 (-0.000266) | 0.073801 / 0.004250 (0.069550) | 0.048503 / 0.037052 (0.011451) | 0.359547 / 0.258489 (0.101058) | 0.405325 / 0.293841 (0.111484) | 0.036615 / 0.128546 (-0.091931) | 0.012185 / 0.075646 (-0.063461) | 0.086829 / 0.419271 (-0.332443) | 0.049101 / 0.043533 (0.005569) | 0.334259 / 0.255139 (0.079120) | 0.376317 / 0.283200 (0.093117) | 0.099935 / 0.141683 (-0.041748) | 1.483166 / 1.452155 (0.031011) | 1.569092 / 1.492716 (0.076375) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.207528 / 0.018006 (0.189521) | 0.437473 / 0.000490 (0.436983) | 0.004915 / 0.000200 (0.004715) | 0.000097 / 0.000054 (0.000043) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028632 / 0.037411 (-0.008780) | 0.111782 / 0.014526 (0.097256) | 0.122545 / 0.176557 (-0.054011) | 0.171191 / 0.737135 (-0.565945) | 0.128999 / 0.296338 (-0.167339) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.424422 / 0.215209 (0.209213) | 4.239488 / 2.077655 (2.161833) | 2.027969 / 1.504120 (0.523849) | 1.800667 / 1.541195 (0.259473) | 1.898701 / 1.468490 (0.430211) | 0.711453 / 4.584777 (-3.873324) | 3.766696 / 3.745712 (0.020984) | 2.107530 / 5.269862 (-3.162331) | 1.347137 / 4.565676 (-3.218540) | 0.086823 / 0.424275 (-0.337452) | 0.012137 / 0.007607 (0.004530) | 0.523143 / 0.226044 (0.297099) | 5.273434 / 2.268929 (3.004505) | 2.545463 / 55.444624 (-52.899161) | 2.246683 / 6.876477 (-4.629793) | 2.296862 / 2.142072 (0.154789) | 0.855690 / 4.805227 (-3.949538) | 0.168526 / 6.500664 (-6.332138) | 0.063392 / 0.075469 (-0.012078) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.248926 / 1.841788 (-0.592862) | 14.676308 / 8.074308 (6.602000) | 14.524364 / 10.191392 (4.332972) | 0.184138 / 0.680424 (-0.496286) | 0.017259 / 0.534201 (-0.516942) | 0.433875 / 0.579283 (-0.145408) | 0.416787 / 0.434364 (-0.017577) | 0.532391 / 0.540337 (-0.007947) | 0.628572 / 1.386936 (-0.758364) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#3929cc227a474ce0c716146c8d14ae94f8a7625b \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006469 / 0.011353 (-0.004884) | 0.004499 / 0.011008 (-0.006510) | 0.098856 / 0.038508 (0.060348) | 0.027753 / 0.023109 (0.004644) | 0.321348 / 0.275898 (0.045450) | 0.351480 / 0.323480 (0.028000) | 0.004949 / 0.007986 (-0.003036) | 0.004655 / 0.004328 (0.000327) | 0.076732 / 0.004250 (0.072482) | 0.036175 / 0.037052 (-0.000878) | 0.310111 / 0.258489 (0.051622) | 0.372427 / 0.293841 (0.078586) | 0.031947 / 0.128546 (-0.096599) | 0.011669 / 0.075646 (-0.063977) | 0.323086 / 0.419271 (-0.096186) | 0.043578 / 0.043533 (0.000045) | 0.325549 / 0.255139 (0.070410) | 0.363827 / 0.283200 (0.080627) | 0.087819 / 0.141683 (-0.053864) | 1.479429 / 1.452155 (0.027274) | 1.549797 / 1.492716 (0.057080) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.178502 / 0.018006 (0.160496) | 0.415954 / 0.000490 (0.415465) | 0.008767 / 0.000200 (0.008567) | 0.000429 / 0.000054 (0.000375) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023639 / 0.037411 (-0.013772) | 0.096266 / 0.014526 (0.081740) | 0.106406 / 0.176557 (-0.070151) | 0.168819 / 0.737135 (-0.568317) | 0.109158 / 0.296338 (-0.187181) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.420729 / 0.215209 (0.205520) | 4.219469 / 2.077655 (2.141814) | 1.885673 / 1.504120 (0.381553) | 1.681868 / 1.541195 (0.140674) | 1.709240 / 1.468490 (0.240749) | 0.694763 / 4.584777 (-3.890014) | 3.395377 / 3.745712 (-0.350335) | 1.846811 / 5.269862 (-3.423051) | 1.158381 / 4.565676 (-3.407296) | 0.082717 / 0.424275 (-0.341558) | 0.012302 / 0.007607 (0.004695) | 0.518148 / 0.226044 (0.292103) | 5.189590 / 2.268929 (2.920661) | 2.294127 / 55.444624 (-53.150498) | 1.960080 / 6.876477 (-4.916397) | 2.045359 / 2.142072 (-0.096713) | 0.803739 / 4.805227 (-4.001488) | 0.152322 / 6.500664 (-6.348342) | 0.067051 / 0.075469 (-0.008418) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.206582 / 1.841788 (-0.635206) | 13.590515 / 8.074308 (5.516207) | 14.083739 / 10.191392 (3.892347) | 0.128738 / 0.680424 (-0.551686) | 0.016577 / 0.534201 (-0.517624) | 0.375499 / 0.579283 (-0.203784) | 0.383256 / 0.434364 (-0.051108) | 0.439441 / 0.540337 
(-0.100896) | 0.518102 / 1.386936 (-0.868834) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006708 / 0.011353 (-0.004645) | 0.004591 / 0.011008 (-0.006417) | 0.076512 / 0.038508 (0.038004) | 0.027977 / 0.023109 (0.004868) | 0.341915 / 0.275898 (0.066017) | 0.374381 / 0.323480 (0.050901) | 0.004985 / 0.007986 (-0.003001) | 0.003374 / 0.004328 (-0.000954) | 0.075334 / 0.004250 (0.071083) | 0.037522 / 0.037052 (0.000470) | 0.341702 / 0.258489 (0.083213) | 0.384342 / 0.293841 (0.090501) | 0.032231 / 0.128546 (-0.096315) | 0.011494 / 0.075646 (-0.064153) | 0.084897 / 0.419271 (-0.334375) | 0.041914 / 0.043533 (-0.001619) | 0.342030 / 0.255139 (0.086891) | 0.371024 / 0.283200 (0.087825) | 0.089936 / 0.141683 (-0.051746) | 1.497242 / 1.452155 (0.045087) | 1.585203 / 1.492716 (0.092486) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.227681 / 0.018006 (0.209674) | 0.398995 / 0.000490 (0.398505) | 0.003232 / 0.000200 (0.003032) | 0.000073 / 0.000054 (0.000019) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024705 / 0.037411 (-0.012706) | 0.099906 / 0.014526 (0.085380) | 0.106806 / 0.176557 (-0.069750) | 0.157521 / 0.737135 (-0.579614) | 0.110803 / 0.296338 (-0.185535) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.457442 / 0.215209 (0.242233) | 4.580101 / 2.077655 (2.502446) | 2.094687 / 1.504120 (0.590567) | 1.880722 / 1.541195 (0.339528) | 1.938746 
/ 1.468490 (0.470256) | 0.700933 / 4.584777 (-3.883844) | 3.416278 / 3.745712 (-0.329434) | 2.852183 / 5.269862 (-2.417679) | 1.602659 / 4.565676 (-2.963017) | 0.083949 / 0.424275 (-0.340326) | 0.012255 / 0.007607 (0.004648) | 0.551631 / 0.226044 (0.325586) | 5.539225 / 2.268929 (3.270296) | 2.707298 / 55.444624 (-52.737326) | 2.354720 / 6.876477 (-4.521757) | 2.320790 / 2.142072 (0.178717) | 0.807152 / 4.805227 (-3.998075) | 0.152048 / 6.500664 (-6.348616) | 0.067723 / 0.075469 (-0.007746) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.295690 / 1.841788 (-0.546097) | 13.738082 / 8.074308 (5.663774) | 14.129549 / 10.191392 (3.938157) | 0.161568 / 0.680424 (-0.518855) | 0.016678 / 0.534201 (-0.517522) | 0.386609 / 0.579283 (-0.192674) | 0.383538 / 0.434364 (-0.050826) | 0.477872 / 0.540337 (-0.062465) | 0.564547 / 1.386936 (-0.822389) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#2ab4c98618bce7c1f60ce96d4a853a940ae4b250 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007247 / 0.011353 (-0.004106) | 0.005044 / 0.011008 (-0.005964) | 0.095135 / 0.038508 (0.056627) | 0.033622 / 0.023109 (0.010513) | 0.309969 / 0.275898 (0.034071) | 0.340354 / 0.323480 (0.016875) | 0.005635 / 0.007986 (-0.002351) | 0.003938 / 0.004328 (-0.000391) | 0.072089 / 0.004250 (0.067838) | 0.045592 / 0.037052 (0.008539) | 0.316620 / 0.258489 (0.058131) | 0.358174 / 0.293841 (0.064333) | 0.036446 / 0.128546 (-0.092100) | 0.011961 / 0.075646 (-0.063685) | 0.332299 / 0.419271 (-0.086973) | 0.049955 / 0.043533 (0.006422) | 0.307638 / 0.255139 (0.052499) | 0.331719 / 0.283200 (0.048519) | 0.095115 / 0.141683 (-0.046568) | 1.457960 / 1.452155 (0.005806) | 1.502812 / 1.492716 (0.010096) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.223747 / 0.018006 (0.205740) | 0.444837 / 0.000490 (0.444347) | 0.002583 / 
0.000200 (0.002383) | 0.000084 / 0.000054 (0.000030) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026461 / 0.037411 (-0.010951) | 0.103946 / 0.014526 (0.089420) | 0.114355 / 0.176557 (-0.062201) | 0.170076 / 0.737135 (-0.567059) | 0.121087 / 0.296338 (-0.175252) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.403252 / 0.215209 (0.188043) | 4.016911 / 2.077655 (1.939257) | 1.787168 / 1.504120 (0.283048) | 1.605206 / 1.541195 (0.064012) | 1.657012 / 1.468490 (0.188522) | 0.701425 / 4.584777 (-3.883352) | 3.818308 / 3.745712 (0.072596) | 3.493757 / 5.269862 (-1.776105) | 1.860534 / 4.565676 (-2.705142) | 0.084994 / 0.424275 (-0.339281) | 0.011904 / 0.007607 (0.004297) | 0.534199 / 0.226044 (0.308155) | 4.992703 / 2.268929 (2.723774) | 2.286231 / 55.444624 (-53.158393) | 1.918163 / 6.876477 (-4.958314) | 2.029811 / 2.142072 (-0.112262) | 0.837532 / 4.805227 (-3.967695) | 0.168545 / 6.500664 (-6.332119) | 0.062866 / 0.075469 (-0.012604) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.172862 / 1.841788 (-0.668926) | 14.966793 / 8.074308 (6.892485) | 14.202079 / 10.191392 (4.010687) | 0.144688 / 0.680424 (-0.535736) | 0.017499 / 0.534201 (-0.516702) | 0.443081 / 0.579283 (-0.136202) | 0.427496 / 0.434364 (-0.006868) | 0.525182 / 0.540337 (-0.015155) | 0.611849 / 1.386936 (-0.775087) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007264 / 0.011353 (-0.004089) | 0.005106 / 0.011008 (-0.005902) | 0.074101 / 0.038508 (0.035593) | 0.033388 / 0.023109 (0.010279) | 0.337108 / 0.275898 (0.061210) | 0.369820 / 0.323480 (0.046340) | 0.005701 / 0.007986 (-0.002284) | 0.003976 / 0.004328 (-0.000353) | 0.073517 / 0.004250 (0.069267) | 0.048741 / 0.037052 (0.011688) | 0.339118 / 0.258489 (0.080629) | 0.398687 / 0.293841 (0.104846) | 0.036661 / 0.128546 (-0.091886) | 0.012082 / 0.075646 (-0.063564) | 0.086743 / 0.419271 (-0.332529) | 0.050150 / 0.043533 (0.006617) | 0.335572 / 0.255139 (0.080433) | 0.354306 / 0.283200 (0.071107) | 0.102074 / 0.141683 (-0.039609) | 1.442911 / 1.452155 (-0.009244) | 1.531564 / 1.492716 (0.038848) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.183163 / 0.018006 (0.165157) | 0.439273 / 0.000490 (0.438783) | 0.002765 / 0.000200 (0.002565) | 0.000225 / 0.000054 (0.000171) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028185 / 0.037411 (-0.009227) | 0.107337 / 0.014526 (0.092811) | 0.119925 / 0.176557 (-0.056631) | 0.172120 / 0.737135 (-0.565015) | 0.124332 / 0.296338 (-0.172007) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.428750 / 0.215209 (0.213541) | 4.268933 / 2.077655 (2.191279) | 2.050135 / 1.504120 (0.546015) | 1.837567 / 1.541195 (0.296372) | 1.907040 / 1.468490 (0.438549) | 0.694162 / 4.584777 (-3.890615) | 3.831542 / 3.745712 (0.085830) | 3.476580 / 5.269862 (-1.793281) | 1.855097 / 4.565676 (-2.710580) | 0.085816 / 0.424275 (-0.338459) | 0.012195 / 0.007607 (0.004588) | 0.544920 / 0.226044 (0.318876) | 5.332977 / 2.268929 (3.064049) | 2.592097 / 55.444624 (-52.852527) | 2.295411 / 6.876477 (-4.581065) | 2.330803 / 2.142072 (0.188730) | 0.833268 / 4.805227 (-3.971959) | 0.177698 / 6.500664 (-6.322966) | 0.063780 / 0.075469 (-0.011689) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.273361 / 1.841788 (-0.568427) | 14.981380 / 8.074308 (6.907072) | 14.395166 / 10.191392 (4.203774) | 0.186590 / 0.680424 (-0.493834) | 0.017676 / 0.534201 (-0.516525) | 0.432100 / 0.579283 (-0.147183) | 0.422490 / 0.434364 (-0.011874) | 0.531421 / 0.540337 (-0.008916) | 0.628548 / 1.386936 (-0.758388) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#3b16e08dd599f4646a77a5ca88b6445467e1e7e9 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009005 / 0.011353 (-0.002348) | 0.005803 / 0.011008 (-0.005205) | 0.103491 / 0.038508 (0.064983) | 0.048099 / 0.023109 (0.024990) | 0.304026 / 0.275898 (0.028128) | 0.340840 / 0.323480 (0.017360) | 0.006782 / 0.007986 (-0.001204) | 0.004625 / 0.004328 (0.000296) | 0.076695 / 0.004250 (0.072445) | 0.057541 / 0.037052 (0.020489) | 0.304015 / 0.258489 (0.045526) | 0.347822 / 0.293841 (0.053981) | 0.037904 / 0.128546 (-0.090642) | 0.012686 / 0.075646 (-0.062960) | 0.368093 / 0.419271 (-0.051179) | 0.051795 / 0.043533 (0.008262) | 0.302553 / 0.255139 (0.047415) | 0.328581 / 0.283200 (0.045381) | 0.108947 / 0.141683 (-0.032736) | 1.449770 / 1.452155 (-0.002385) | 1.541944 / 1.492716 (0.049227) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.207529 / 0.018006 (0.189523) | 0.455313 / 0.000490 (0.454823) | 0.008276 / 0.000200 (0.008076) | 0.000322 / 0.000054 (0.000268) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030564 / 0.037411 (-0.006848) | 0.122790 / 0.014526 (0.108264) | 0.126981 / 0.176557 (-0.049576) | 0.187203 / 0.737135 (-0.549932) | 0.129931 / 0.296338 (-0.166408) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.402680 / 0.215209 
(0.187471) | 4.017505 / 2.077655 (1.939850) | 1.801480 / 1.504120 (0.297360) | 1.647984 / 1.541195 (0.106790) | 1.702596 / 1.468490 (0.234106) | 0.717469 / 4.584777 (-3.867308) | 3.793813 / 3.745712 (0.048101) | 2.288014 / 5.269862 (-2.981848) | 1.497545 / 4.565676 (-3.068132) | 0.091241 / 0.424275 (-0.333034) | 0.013115 / 0.007607 (0.005508) | 0.498567 / 0.226044 (0.272522) | 4.990203 / 2.268929 (2.721275) | 2.334983 / 55.444624 (-53.109642) | 2.047888 / 6.876477 (-4.828589) | 2.167825 / 2.142072 (0.025753) | 0.863769 / 4.805227 (-3.941459) | 0.172699 / 6.500664 (-6.327965) | 0.069285 / 0.075469 (-0.006184) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.397331 / 1.841788 (-0.444457) | 16.678240 / 8.074308 (8.603932) | 16.665143 / 10.191392 (6.473751) | 0.151011 / 0.680424 (-0.529412) | 0.018303 / 0.534201 (-0.515898) | 0.445389 / 0.579283 (-0.133894) | 0.444644 / 0.434364 (0.010280) | 0.524647 / 0.540337 (-0.015690) | 0.629747 / 1.386936 (-0.757189) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008853 / 0.011353 (-0.002499) | 0.006196 / 0.011008 (-0.004813) | 0.078595 / 0.038508 (0.040087) | 0.048348 / 0.023109 (0.025239) | 0.347038 / 0.275898 (0.071140) | 0.385807 / 0.323480 (0.062327) | 0.007047 / 0.007986 (-0.000938) | 0.004772 / 0.004328 (0.000443) | 0.076116 / 0.004250 (0.071866) | 0.058805 / 0.037052 (0.021752) | 0.345731 / 0.258489 (0.087242) | 0.401589 / 0.293841 (0.107748) | 0.039349 / 0.128546 (-0.089197) | 0.012949 / 0.075646 (-0.062697) | 0.089761 / 0.419271 (-0.329511) | 0.060001 / 0.043533 (0.016468) | 0.351587 / 0.255139 (0.096448) | 0.377708 / 0.283200 (0.094509) | 0.117391 / 0.141683 (-0.024292) | 1.471622 / 1.452155 (0.019467) | 1.568759 / 1.492716 (0.076042) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.191390 / 0.018006 (0.173384) | 0.469033 / 0.000490 (0.468544) | 0.003615 / 0.000200 (0.003415) | 0.000113 / 0.000054 
(0.000059) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032706 / 0.037411 (-0.004706) | 0.127095 / 0.014526 (0.112569) | 0.128755 / 0.176557 (-0.047801) | 0.182590 / 0.737135 (-0.554545) | 0.136939 / 0.296338 (-0.159400) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.427392 / 0.215209 (0.212183) | 4.246708 / 2.077655 (2.169053) | 2.115557 / 1.504120 (0.611437) | 2.021221 / 1.541195 (0.480026) | 2.177559 / 1.468490 (0.709069) | 0.713930 / 4.584777 (-3.870847) | 4.192467 / 3.745712 (0.446755) | 3.645437 / 5.269862 (-1.624424) | 1.964986 / 4.565676 (-2.600690) | 0.089436 / 0.424275 (-0.334839) | 0.012917 / 0.007607 (0.005310) | 0.530468 / 0.226044 (0.304423) | 5.310759 / 2.268929 (3.041831) | 2.613566 / 55.444624 (-52.831058) | 2.350443 / 6.876477 (-4.526034) | 2.385278 / 2.142072 (0.243205) | 0.862838 / 4.805227 (-3.942389) | 0.172246 / 6.500664 (-6.328418) | 0.069570 / 0.075469 (-0.005899) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.310008 / 1.841788 (-0.531780) | 16.557079 / 8.074308 (8.482771) | 15.818145 / 10.191392 (5.626752) | 0.180337 / 0.680424 (-0.500087) | 0.018117 / 0.534201 (-0.516083) | 0.433189 / 0.579283 (-0.146095) | 0.429276 / 0.434364 (-0.005088) | 0.539757 / 0.540337 (-0.000580) | 0.640905 / 1.386936 (-0.746031) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#3b16e08dd599f4646a77a5ca88b6445467e1e7e9 \"CML watermark\")\n" ]
1,680,102,367,000
1,680,114,634,000
1,680,113,754,000
MEMBER
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5684/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5684/timeline
null
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5684", "html_url": "https://github.com/huggingface/datasets/pull/5684", "diff_url": "https://github.com/huggingface/datasets/pull/5684.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5684.patch", "merged_at": "2023-03-29T18:15:54" }
true
https://api.github.com/repos/huggingface/datasets/issues/5683
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5683/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5683/comments
https://api.github.com/repos/huggingface/datasets/issues/5683/events
https://github.com/huggingface/datasets/pull/5683
1,646,001,197
PR_kwDODunzps5NLUq1
5,683
Fix verification_mode when ignore_verifications is passed
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006935 / 0.011353 (-0.004418) | 0.004711 / 0.011008 (-0.006297) | 0.098461 / 0.038508 (0.059953) | 0.028889 / 0.023109 (0.005780) | 0.332167 / 0.275898 (0.056269) | 0.363309 / 0.323480 (0.039829) | 0.005179 / 0.007986 (-0.002807) | 0.004783 / 0.004328 (0.000455) | 0.074293 / 0.004250 (0.070043) | 0.038778 / 0.037052 (0.001726) | 0.318871 / 0.258489 (0.060382) | 0.362975 / 0.293841 (0.069134) | 0.032897 / 0.128546 (-0.095649) | 0.011685 / 0.075646 (-0.063961) | 0.322824 / 0.419271 (-0.096447) | 0.043842 / 0.043533 (0.000309) | 0.334789 / 0.255139 (0.079650) | 0.352922 / 0.283200 (0.069723) | 0.089692 / 0.141683 (-0.051991) | 1.490110 / 1.452155 (0.037955) | 1.601530 / 1.492716 (0.108813) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.201882 / 0.018006 (0.183875) | 0.410875 / 0.000490 (0.410385) | 0.002472 / 0.000200 (0.002272) | 0.000073 / 0.000054 (0.000018) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023636 / 0.037411 (-0.013775) | 0.102168 / 0.014526 (0.087642) | 0.107247 / 0.176557 (-0.069310) | 0.171858 / 0.737135 (-0.565278) | 0.110619 / 0.296338 (-0.185720) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.433740 / 0.215209 (0.218531) | 4.332121 / 2.077655 (2.254466) | 2.075398 
/ 1.504120 (0.571278) | 1.941074 / 1.541195 (0.399879) | 2.033331 / 1.468490 (0.564841) | 0.697134 / 4.584777 (-3.887643) | 3.463855 / 3.745712 (-0.281857) | 3.080446 / 5.269862 (-2.189416) | 1.575020 / 4.565676 (-2.990656) | 0.083054 / 0.424275 (-0.341221) | 0.012454 / 0.007607 (0.004847) | 0.537996 / 0.226044 (0.311951) | 5.366765 / 2.268929 (3.097836) | 2.464398 / 55.444624 (-52.980227) | 2.143912 / 6.876477 (-4.732564) | 2.245706 / 2.142072 (0.103634) | 0.801397 / 4.805227 (-4.003831) | 0.150954 / 6.500664 (-6.349710) | 0.066758 / 0.075469 (-0.008711) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.216412 / 1.841788 (-0.625376) | 13.679322 / 8.074308 (5.605014) | 14.055286 / 10.191392 (3.863894) | 0.130264 / 0.680424 (-0.550160) | 0.016566 / 0.534201 (-0.517635) | 0.379126 / 0.579283 (-0.200157) | 0.390815 / 0.434364 (-0.043549) | 0.437586 / 0.540337 (-0.102751) | 0.526822 / 1.386936 (-0.860114) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006898 / 0.011353 (-0.004455) | 0.004705 / 0.011008 (-0.006304) | 0.078592 / 0.038508 (0.040084) | 0.028635 / 0.023109 (0.005525) | 0.340143 / 0.275898 (0.064245) | 0.377526 / 0.323480 (0.054047) | 0.005645 / 0.007986 (-0.002340) | 0.003533 / 0.004328 (-0.000796) | 0.078441 / 0.004250 (0.074191) | 0.039408 / 0.037052 (0.002356) | 0.342303 / 0.258489 (0.083814) | 0.386837 / 0.293841 (0.092996) | 0.032427 / 0.128546 (-0.096119) | 0.011763 / 0.075646 (-0.063883) | 0.087984 / 0.419271 (-0.331287) | 0.042126 / 0.043533 (-0.001406) | 0.339951 / 0.255139 (0.084812) | 0.366165 / 0.283200 (0.082966) | 0.091414 / 0.141683 (-0.050269) | 1.502034 / 1.452155 (0.049880) | 1.597901 / 1.492716 (0.105184) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.232122 / 0.018006 (0.214115) | 0.410205 / 0.000490 (0.409715) | 0.000418 / 0.000200 (0.000218) | 0.000063 / 0.000054 (0.000009) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026013 / 0.037411 (-0.011399) | 0.105520 / 0.014526 (0.090995) | 0.108649 / 0.176557 (-0.067908) | 0.159324 / 0.737135 (-0.577811) | 0.114033 / 0.296338 (-0.182306) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.455634 / 0.215209 (0.240425) | 4.508544 / 2.077655 (2.430889) | 2.087065 / 1.504120 (0.582945) | 1.872622 / 1.541195 (0.331427) | 1.935617 / 1.468490 (0.467127) | 0.696909 / 4.584777 (-3.887868) | 3.449365 / 3.745712 (-0.296348) | 3.008399 / 5.269862 (-2.261462) | 1.459245 / 4.565676 (-3.106431) | 0.083637 / 0.424275 (-0.340638) | 0.012358 / 0.007607 (0.004750) | 0.547232 / 0.226044 (0.321187) | 5.522395 / 2.268929 (3.253466) | 2.691019 / 55.444624 (-52.753605) | 2.408083 / 6.876477 (-4.468394) | 2.369239 / 2.142072 (0.227166) | 0.807148 / 4.805227 (-3.998080) | 0.152030 / 6.500664 (-6.348634) | 0.067883 / 0.075469 (-0.007586) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.336956 / 1.841788 (-0.504832) | 14.403730 / 8.074308 (6.329422) | 14.854084 / 10.191392 (4.662692) | 0.146530 / 0.680424 (-0.533894) | 0.016611 / 0.534201 (-0.517590) | 0.398557 / 0.579283 (-0.180726) | 0.393194 / 0.434364 (-0.041170) | 0.486824 / 0.540337 (-0.053513) | 0.572844 / 1.386936 (-0.814092) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#411f9cc281e50954ea0c903e7a0a6618b3d31b9e \"CML watermark\")\n" ]
1,680,102,050,000
1,680,111,366,000
1,680,110,937,000
MEMBER
null
This PR fixes the values assigned to `verification_mode` when passing `ignore_verifications` to `load_dataset`. Related to: - #5303 Fix #5682.
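For context, a minimal sketch of the kind of mapping this fix implies — `_resolve_verification_mode` is a hypothetical helper name for illustration, not the actual patched function:

```python
# Hedged sketch: map the deprecated boolean onto valid VerificationMode members
# ("no_checks" / "all_checks") instead of the invalid string "none".
from datasets import VerificationMode

def _resolve_verification_mode(verification_mode=None, ignore_verifications=None):
    if ignore_verifications is not None:  # deprecated flag only takes effect if passed
        verification_mode = (
            VerificationMode.NO_CHECKS if ignore_verifications else VerificationMode.ALL_CHECKS
        )
    # Fall back to the default mode; the enum constructor also accepts strings.
    return VerificationMode(verification_mode or VerificationMode.BASIC_CHECKS)
```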
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5683/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5683/timeline
null
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5683", "html_url": "https://github.com/huggingface/datasets/pull/5683", "diff_url": "https://github.com/huggingface/datasets/pull/5683.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5683.patch", "merged_at": "2023-03-29T17:28:57" }
true
https://api.github.com/repos/huggingface/datasets/issues/5682
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5682/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5682/comments
https://api.github.com/repos/huggingface/datasets/issues/5682/events
https://github.com/huggingface/datasets/issues/5682
1,646,000,571
I_kwDODunzps5iG_m7
5,682
ValueError when passing ignore_verifications
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
[]
1,680,102,030,000
1,680,110,938,000
1,680,110,938,000
MEMBER
null
When passing `ignore_verifications=True` to `load_dataset`, we get a ValueError: ``` ValueError: 'none' is not a valid VerificationMode ```
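A minimal reproduction sketch — the dataset name is just an illustrative placeholder; any load that goes through the deprecated flag should trigger it:

```python
from datasets import load_dataset

# On the affected version, the deprecated flag is mapped to the string "none",
# which is not a member of the VerificationMode enum, hence the ValueError.
ds = load_dataset("squad", ignore_verifications=True)
# ValueError: 'none' is not a valid VerificationMode
```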
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5682/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5682/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5681
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5681/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5681/comments
https://api.github.com/repos/huggingface/datasets/issues/5681/events
https://github.com/huggingface/datasets/issues/5681
1,645,630,784
I_kwDODunzps5iFlVA
5,681
Add information about pattern search order to the doc about structuring repos
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
closed
false
{ "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "repos_url": "https://api.github.com/users/stevhliu/repos", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "type": "User", "site_admin": false }
[ { "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "repos_url": "https://api.github.com/users/stevhliu/repos", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "type": "User", "site_admin": false } ]
[ "Good idea, I think I've seen this a couple of times before too on the forums. I can work on this :)", "Closed in #5693 " ]
1,680,090,289,000
1,680,546,671,000
1,680,546,671,000
CONTRIBUTOR
null
Following [this](https://github.com/huggingface/datasets/issues/5650) issue, I think we should add a note about the order of the patterns used to find splits; see [my comment](https://github.com/huggingface/datasets/issues/5650#issuecomment-1488412527). We should also reference this page in the pages about packaged loaders. I have a dΓ©jΓ  vu that this was already discussed at some point, but I don't remember where...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5681/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5681/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5680
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5680/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5680/comments
https://api.github.com/repos/huggingface/datasets/issues/5680/events
https://github.com/huggingface/datasets/pull/5680
1,645,430,103
PR_kwDODunzps5NJYNz
5,680
Fix a description error for interleave_datasets.
{ "login": "QizhiPei", "id": 55624066, "node_id": "MDQ6VXNlcjU1NjI0MDY2", "avatar_url": "https://avatars.githubusercontent.com/u/55624066?v=4", "gravatar_id": "", "url": "https://api.github.com/users/QizhiPei", "html_url": "https://github.com/QizhiPei", "followers_url": "https://api.github.com/users/QizhiPei/followers", "following_url": "https://api.github.com/users/QizhiPei/following{/other_user}", "gists_url": "https://api.github.com/users/QizhiPei/gists{/gist_id}", "starred_url": "https://api.github.com/users/QizhiPei/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/QizhiPei/subscriptions", "organizations_url": "https://api.github.com/users/QizhiPei/orgs", "repos_url": "https://api.github.com/users/QizhiPei/repos", "events_url": "https://api.github.com/users/QizhiPei/events{/privacy}", "received_events_url": "https://api.github.com/users/QizhiPei/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006772 / 0.011353 (-0.004581) | 0.004674 / 0.011008 (-0.006335) | 0.098702 / 0.038508 (0.060194) | 0.028257 / 0.023109 (0.005148) | 0.368008 / 0.275898 (0.092110) | 0.402825 / 0.323480 (0.079345) | 0.005158 / 0.007986 (-0.002828) | 0.003470 / 0.004328 (-0.000858) | 0.075541 / 0.004250 (0.071291) | 0.039755 / 0.037052 (0.002702) | 0.373431 / 0.258489 (0.114942) | 0.410159 / 0.293841 (0.116318) | 0.031355 / 0.128546 (-0.097192) | 0.011632 / 0.075646 (-0.064014) | 0.325475 / 0.419271 (-0.093797) | 0.042574 / 0.043533 (-0.000958) | 0.373629 / 0.255139 (0.118490) | 0.393921 / 0.283200 (0.110721) | 0.084669 / 0.141683 (-0.057013) | 1.459947 / 1.452155 (0.007792) | 1.529593 / 1.492716 (0.036877) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.189994 / 0.018006 (0.171988) | 0.409091 / 0.000490 (0.408602) | 0.003693 / 0.000200 (0.003493) | 0.000072 / 0.000054 (0.000018) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024649 / 0.037411 (-0.012762) | 0.097702 / 0.014526 (0.083177) | 0.103650 / 0.176557 (-0.072906) | 0.167141 / 0.737135 (-0.569994) | 0.108460 / 0.296338 (-0.187879) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old 
(diff) | 0.429544 / 0.215209 (0.214335) | 4.277106 / 2.077655 (2.199451) | 2.018745 / 1.504120 (0.514625) | 1.814782 / 1.541195 (0.273587) | 1.897030 / 1.468490 (0.428540) | 0.700332 / 4.584777 (-3.884445) | 3.421761 / 3.745712 (-0.323951) | 3.008281 / 5.269862 (-2.261581) | 1.554230 / 4.565676 (-3.011446) | 0.082922 / 0.424275 (-0.341353) | 0.012312 / 0.007607 (0.004705) | 0.527757 / 0.226044 (0.301713) | 5.287450 / 2.268929 (3.018522) | 2.329083 / 55.444624 (-53.115542) | 2.016651 / 6.876477 (-4.859826) | 2.214510 / 2.142072 (0.072437) | 0.807676 / 4.805227 (-3.997551) | 0.151752 / 6.500664 (-6.348912) | 0.066819 / 0.075469 (-0.008651) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.239522 / 1.841788 (-0.602266) | 13.923672 / 8.074308 (5.849364) | 14.317394 / 10.191392 (4.126002) | 0.159379 / 0.680424 (-0.521045) | 0.016537 / 0.534201 (-0.517664) | 0.376808 / 0.579283 (-0.202475) | 0.376351 / 0.434364 (-0.058012) | 0.437124 / 0.540337 (-0.103213) | 0.520589 / 1.386936 (-0.866347) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006892 / 0.011353 (-0.004461) | 0.004671 / 0.011008 (-0.006337) | 0.075841 / 0.038508 (0.037333) | 0.028713 / 0.023109 (0.005604) | 0.345105 / 0.275898 (0.069207) | 0.380694 / 0.323480 (0.057214) | 0.005155 / 0.007986 (-0.002830) | 0.003379 / 0.004328 (-0.000949) | 0.075134 / 0.004250 (0.070883) | 0.039990 / 0.037052 (0.002938) | 0.345540 / 0.258489 (0.087051) | 0.389913 / 0.293841 (0.096072) | 0.032089 / 0.128546 (-0.096458) | 0.011583 / 0.075646 (-0.064063) | 0.085169 / 0.419271 (-0.334102) | 0.041847 / 0.043533 (-0.001686) | 0.341504 / 0.255139 (0.086365) | 0.367582 / 0.283200 (0.084382) | 0.092684 / 0.141683 (-0.048999) | 1.498647 / 1.452155 (0.046492) | 1.549056 / 1.492716 (0.056339) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.228643 / 0.018006 (0.210637) | 0.410680 / 0.000490 (0.410191) | 0.000398 / 0.000200 
(0.000198) | 0.000059 / 0.000054 (0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025354 / 0.037411 (-0.012057) | 0.101567 / 0.014526 (0.087041) | 0.108340 / 0.176557 (-0.068217) | 0.157804 / 0.737135 (-0.579332) | 0.113985 / 0.296338 (-0.182354) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.436427 / 0.215209 (0.221218) | 4.359331 / 2.077655 (2.281676) | 2.047877 / 1.504120 (0.543757) | 1.844242 / 1.541195 (0.303047) | 1.924553 / 1.468490 (0.456063) | 0.695986 / 4.584777 (-3.888791) | 3.435571 / 3.745712 (-0.310141) | 1.905189 / 5.269862 (-3.364673) | 1.198542 / 4.565676 (-3.367134) | 0.083386 / 0.424275 (-0.340889) | 0.012442 / 0.007607 (0.004835) | 0.542562 / 0.226044 (0.316517) | 5.416554 / 2.268929 (3.147625) | 2.499496 / 55.444624 (-52.945128) | 2.160658 / 6.876477 (-4.715819) | 2.210535 / 2.142072 (0.068462) | 0.803324 / 4.805227 (-4.001903) | 0.151735 / 6.500664 (-6.348929) | 0.068392 / 0.075469 (-0.007078) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.319915 / 1.841788 (-0.521873) | 14.176755 / 8.074308 (6.102446) | 14.376366 / 10.191392 (4.184974) | 0.141219 / 0.680424 (-0.539204) | 0.017181 / 0.534201 (-0.517020) | 0.383589 / 0.579283 (-0.195694) | 0.389352 / 0.434364 (-0.045012) | 0.474465 / 0.540337 (-0.065873) | 0.563047 / 1.386936 (-0.823889) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#c33e8ce68b5000988bf6b2e4bca27ffaa469acea \"CML watermark\")\n" ]
1,680,083,423,000
1,680,182,059,000
1,680,181,638,000
CONTRIBUTOR
null
There is a mistake in the docstring example for `interleave_datasets` with the "all_exhausted" stopping_strategy. ``` python d1 = Dataset.from_dict({"a": [0, 1, 2]}) d2 = Dataset.from_dict({"a": [10, 11, 12, 13]}) d3 = Dataset.from_dict({"a": [20, 21, 22, 23, 24]}) dataset = interleave_datasets([d1, d2, d3], stopping_strategy="all_exhausted") ``` Given the round-robin interleaving order, the correct output of `dataset["a"]` is `[0, 10, 20, 1, 11, 21, 2, 12, 22, 0, 13, 23, 1, 10, 24]`, not `[0, 10, 20, 1, 11, 21, 2, 12, 22, 0, 13, 23, 1, 0, 24]`.
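To see why, here is a toy re-implementation of the round-robin "all_exhausted" strategy — purely illustrative, not the actual `datasets` code — that reproduces the corrected output:

```python
from itertools import cycle

def interleave_all_exhausted(*columns):
    # Round-robin over all sources, cycling exhausted ones, until every
    # source has been fully consumed at least once.
    iterators = [cycle(col) for col in columns]
    counts = [0] * len(columns)
    exhausted = [False] * len(columns)
    out = []
    while not all(exhausted):
        for i, it in enumerate(iterators):
            out.append(next(it))
            counts[i] += 1
            if counts[i] >= len(columns[i]):
                exhausted[i] = True
            if all(exhausted):  # stop on the sample that exhausts the last source
                break
    return out

print(interleave_all_exhausted([0, 1, 2], [10, 11, 12, 13], [20, 21, 22, 23, 24]))
# [0, 10, 20, 1, 11, 21, 2, 12, 22, 0, 13, 23, 1, 10, 24]
```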
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5680/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5680/timeline
null
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5680", "html_url": "https://github.com/huggingface/datasets/pull/5680", "diff_url": "https://github.com/huggingface/datasets/pull/5680.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5680.patch", "merged_at": "2023-03-30T13:07:18" }
true
https://api.github.com/repos/huggingface/datasets/issues/5679
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5679/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5679/comments
https://api.github.com/repos/huggingface/datasets/issues/5679/events
https://github.com/huggingface/datasets/issues/5679
1,645,184,622
I_kwDODunzps5iD4Zu
5,679
Allow load_dataset to take a working dir for intermediate data
{ "login": "lu-wang-dl", "id": 38018689, "node_id": "MDQ6VXNlcjM4MDE4Njg5", "avatar_url": "https://avatars.githubusercontent.com/u/38018689?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lu-wang-dl", "html_url": "https://github.com/lu-wang-dl", "followers_url": "https://api.github.com/users/lu-wang-dl/followers", "following_url": "https://api.github.com/users/lu-wang-dl/following{/other_user}", "gists_url": "https://api.github.com/users/lu-wang-dl/gists{/gist_id}", "starred_url": "https://api.github.com/users/lu-wang-dl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lu-wang-dl/subscriptions", "organizations_url": "https://api.github.com/users/lu-wang-dl/orgs", "repos_url": "https://api.github.com/users/lu-wang-dl/repos", "events_url": "https://api.github.com/users/lu-wang-dl/events{/privacy}", "received_events_url": "https://api.github.com/users/lu-wang-dl/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
[ "Hi ! AFAIK a dataset must be present on a local disk to be able to efficiently memory map the datasets Arrow files. What makes you think that it is possible to load from a cloud storage and have good performance ?\r\n\r\nAnyway it's already possible to download_and_prepare a dataset as Arrow files in a cloud storage with:\r\n```python\r\nbuilder = load_dataset_builder(..., cache_dir=\"/temp/dir\")\r\nbuilder.download_and_prepare(\"/cloud_dir\")\r\n```\r\n\r\nbut then \r\n```python\r\nds = builder.as_dataset()\r\n```\r\nwould fail if \"/cloud_dir\" is not a local directory.", "In my use case, I am trying to mount the S3 bucket as local system with S3FS-FUSE / [goofys](https://github.com/kahing/goofys). I want to use S3 to save the download data and save checkpoint for training for persistent. Setting the s3 location as cache directory is not fast enough. That is why I want to set a work directory for temp data for memory map and only save the final result to s3 cache. ", "You can try setting `HF_DATASETS_DOWNLOADED_DATASETS_PATH` and `HF_DATASETS_EXTRACTED_DATASETS_PATH` to S3, and `HF_DATASETS_CACHE` to your local disk.\r\n\r\nThis way all your downloaded and extracted data are on your mounted S3, but the datasets Arrow files are on your local disk", "If we hope to also persist the Arrow files on the mounted S3 but work with the efficiency of local disk, is there any recommended way to do this, other than copying the Arrow files from local disk to S3?" ]
1,680,074,469,000
1,681,338,625,000
null
NONE
null
### Feature request

As a user, I can set a working dir for intermediate data creation. The processed files will be moved to the cache dir, like

```
load_dataset(..., working_dir="/temp/dir", cache_dir="/cloud_dir")
```

### Motivation

This will help the use case of using cloud storage as the datasets cache: the heavy intermediate work happens in a fast local working dir, and only the final result is moved to the (slower) cloud-backed cache, which boosts performance.

### Your contribution

I can provide a PR for this if the proposal seems reasonable.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5679/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5679/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5678
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5678/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5678/comments
https://api.github.com/repos/huggingface/datasets/issues/5678/events
https://github.com/huggingface/datasets/issues/5678
1,645,018,359
I_kwDODunzps5iDPz3
5,678
Add support to create a Dataset from a Spark DataFrame
{ "login": "lu-wang-dl", "id": 38018689, "node_id": "MDQ6VXNlcjM4MDE4Njg5", "avatar_url": "https://avatars.githubusercontent.com/u/38018689?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lu-wang-dl", "html_url": "https://github.com/lu-wang-dl", "followers_url": "https://api.github.com/users/lu-wang-dl/followers", "following_url": "https://api.github.com/users/lu-wang-dl/following{/other_user}", "gists_url": "https://api.github.com/users/lu-wang-dl/gists{/gist_id}", "starred_url": "https://api.github.com/users/lu-wang-dl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lu-wang-dl/subscriptions", "organizations_url": "https://api.github.com/users/lu-wang-dl/orgs", "repos_url": "https://api.github.com/users/lu-wang-dl/repos", "events_url": "https://api.github.com/users/lu-wang-dl/events{/privacy}", "received_events_url": "https://api.github.com/users/lu-wang-dl/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
[ "if i read spark Dataframe , got an error on multi-node Spark cluster.\r\nDid the Api (Dataset.from_spark) support Spark cluster, read dataframe and save_to_disk?\r\n\r\nError: \r\n_pickle.PicklingError: Could not serialize object: RuntimeError: It appears that you are attempting to reference SparkContext from a broadcast variable, action, or transforma\r\ntion. SparkContext can only be used on the driver, not in code that it run on workers. For more information, see SPARK-5063.\r\n23/06/16 21:17:20 WARN ExecutorPodsWatchSnapshotSource: Kubernetes client has been closed (this is expected if the application is shutting down.)\r\n\r\n", "How to perform predictions on Dataset object in Spark with multi-node cluster parallelism?" ]
1,680,064,588,000
1,686,928,561,000
null
NONE
null
### Feature request

Add a new API `Dataset.from_spark` to create a Dataset from a Spark DataFrame.

### Motivation

Spark is a distributed computing framework that can handle large datasets. By supporting loading Spark DataFrames directly into Hugging Face Datasets, we can take advantage of Spark to process the data in parallel. By providing a seamless integration between these two frameworks, we make it easier for data scientists and developers to work with both Spark and Hugging Face in the same workflow.

### Your contribution

We can discuss the ideas, and I can help prepare a PR for this feature.
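For illustration, a minimal sketch of the requested API as it could look from the Spark driver; `Dataset.from_spark` did later land in `datasets`, but treat the exact call shape here as an assumption rather than a reference:

```python
from pyspark.sql import SparkSession
from datasets import Dataset

spark = SparkSession.builder.appName("hf-from-spark-demo").getOrCreate()
df = spark.createDataFrame([("hello", 0), ("world", 1)], ["text", "label"])

# Must run on the driver: the conversion triggers a Spark job over the
# DataFrame's partitions and materializes the result as Arrow data
# (referencing SparkContext from worker code fails, per the comment above).
ds = Dataset.from_spark(df)
print(ds)
```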
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5678/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5678/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5677
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5677/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5677/comments
https://api.github.com/repos/huggingface/datasets/issues/5677/events
https://github.com/huggingface/datasets/issues/5677
1,644,828,606
I_kwDODunzps5iChe-
5,677
Dataset.map() crashes when any column contains more than 1000 empty dictionaries
{ "login": "destigres", "id": 7139344, "node_id": "MDQ6VXNlcjcxMzkzNDQ=", "avatar_url": "https://avatars.githubusercontent.com/u/7139344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/destigres", "html_url": "https://github.com/destigres", "followers_url": "https://api.github.com/users/destigres/followers", "following_url": "https://api.github.com/users/destigres/following{/other_user}", "gists_url": "https://api.github.com/users/destigres/gists{/gist_id}", "starred_url": "https://api.github.com/users/destigres/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/destigres/subscriptions", "organizations_url": "https://api.github.com/users/destigres/orgs", "repos_url": "https://api.github.com/users/destigres/repos", "events_url": "https://api.github.com/users/destigres/events{/privacy}", "received_events_url": "https://api.github.com/users/destigres/received_events", "type": "User", "site_admin": false }
[]
open
false
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false } ]
[]
1,680,048,091,000
1,681,496,160,000
null
NONE
null
### Describe the bug

`Dataset.map()` crashes any time any column contains more than `writer_batch_size` (default 1000) empty dictionaries, regardless of whether the column is being operated on. The error does not occur if the dictionaries are non-empty.

### Steps to reproduce the bug

Example:

```python
import datasets

def add_one(example):
    example["col2"] += 1
    return example

n = 1001  # crashes
# n = 999  # works
ds = datasets.Dataset.from_dict({"col1": [{}] * n, "col2": [1] * n})
ds = ds.map(add_one, writer_batch_size=1000)
```

### Expected behavior

The above code should not crash.

### Environment info

- `datasets` version: 2.10.1
- Platform: Linux-5.4.0-120-generic-x86_64-with-glibc2.10
- Python version: 3.8.15
- PyArrow version: 9.0.0
- Pandas version: 1.5.3
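Until this is fixed, one possible workaround (an editor's sketch, not an official fix) is to drop the all-empty-dict column before mapping, since it carries no information anyway:

```python
import datasets

n = 1001
ds = datasets.Dataset.from_dict({"col1": [{}] * n, "col2": [1] * n})

# Dropping the problematic column sidesteps the writer crash; map as usual.
ds = ds.remove_columns("col1")
ds = ds.map(lambda example: {"col2": example["col2"] + 1}, writer_batch_size=1000)
```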
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5677/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5677/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5675
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5675/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5675/comments
https://api.github.com/repos/huggingface/datasets/issues/5675/events
https://github.com/huggingface/datasets/issues/5675
1,641,763,478
I_kwDODunzps5h21KW
5,675
Filter datasets by language code
{ "login": "named-entity", "id": 5658496, "node_id": "MDQ6VXNlcjU2NTg0OTY=", "avatar_url": "https://avatars.githubusercontent.com/u/5658496?v=4", "gravatar_id": "", "url": "https://api.github.com/users/named-entity", "html_url": "https://github.com/named-entity", "followers_url": "https://api.github.com/users/named-entity/followers", "following_url": "https://api.github.com/users/named-entity/following{/other_user}", "gists_url": "https://api.github.com/users/named-entity/gists{/gist_id}", "starred_url": "https://api.github.com/users/named-entity/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/named-entity/subscriptions", "organizations_url": "https://api.github.com/users/named-entity/orgs", "repos_url": "https://api.github.com/users/named-entity/repos", "events_url": "https://api.github.com/users/named-entity/events{/privacy}", "received_events_url": "https://api.github.com/users/named-entity/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The dataset still can be found, if instead of using the search form you just enter the language code in the url, like https://huggingface.co/datasets?language=language:myv. \r\n\r\nBut of course having a more complete list of languages in the search form (or just a fallback to the language codes, if they are missing from the code=>language mapping) would be much more convenient!", "Hi! I've opened a PR to make these languages searchable on the Hub.", "Thanks @mariosasko!\r\nDo you think it is possible to turn this into a more scalable pipeline? Such as:\r\n1. Looping through all the datasets on the hub and collecting the set of all their language codes;\r\n2. Selecting the codes not covered yet in `Language.ts`\r\n3. Looking up their codes at https://iso639-3.sil.org/code_tables/639/data\r\n4. Adding all the newly found language codes to `Language.ts`", "@avidale This has been discussed in https://github.com/huggingface/datasets/issues/4881, so also feel free to share your opinion there." ]
1,679,910,148,000
1,680,163,695,000
1,680,163,695,000
NONE
null
Hi! I use the language search field on https://huggingface.co/datasets. However, some of the datasets tagged with an ISO language code are not accessible through this search form. For example, [myv_ru_2022](https://huggingface.co/datasets/slone/myv_ru_2022) has the `myv` language tag, but it is not included in the Languages search form. I've also noticed the same problem with `mhr` (see https://huggingface.co/datasets/AigizK/mari-russian-parallel-corpora).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5675/reactions", "total_count": 6, "+1": 6, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5675/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5674
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5674/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5674/comments
https://api.github.com/repos/huggingface/datasets/issues/5674/events
https://github.com/huggingface/datasets/issues/5674
1,641,084,105
I_kwDODunzps5h0PTJ
5,674
Stored XSS
{ "login": "Fadavvi", "id": 21213484, "node_id": "MDQ6VXNlcjIxMjEzNDg0", "avatar_url": "https://avatars.githubusercontent.com/u/21213484?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Fadavvi", "html_url": "https://github.com/Fadavvi", "followers_url": "https://api.github.com/users/Fadavvi/followers", "following_url": "https://api.github.com/users/Fadavvi/following{/other_user}", "gists_url": "https://api.github.com/users/Fadavvi/gists{/gist_id}", "starred_url": "https://api.github.com/users/Fadavvi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Fadavvi/subscriptions", "organizations_url": "https://api.github.com/users/Fadavvi/orgs", "repos_url": "https://api.github.com/users/Fadavvi/repos", "events_url": "https://api.github.com/users/Fadavvi/events{/privacy}", "received_events_url": "https://api.github.com/users/Fadavvi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi! You can contact `[email protected]` to report this vulnerability." ]
1,679,864,158,000
1,679,950,915,000
1,679,950,915,000
NONE
null
### Describe the bug

I found a stored XSS on a page that is publicly accessible to all visitors, but I didn't find a suitable place to report it. Please guide me on this.

### Steps to reproduce the bug

Due to security restrictions, I don't want to publish it publicly.

### Expected behavior

User inputs must be sanitized before rendering.

### Environment info

https://huggingface.co/ Web UI
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5674/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5674/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5673
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5673/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5673/comments
https://api.github.com/repos/huggingface/datasets/issues/5673/events
https://github.com/huggingface/datasets/pull/5673
1,641,066,352
PR_kwDODunzps5M6wc3
5,673
Pass down storage options
{ "login": "dwyatte", "id": 2512762, "node_id": "MDQ6VXNlcjI1MTI3NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/2512762?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dwyatte", "html_url": "https://github.com/dwyatte", "followers_url": "https://api.github.com/users/dwyatte/followers", "following_url": "https://api.github.com/users/dwyatte/following{/other_user}", "gists_url": "https://api.github.com/users/dwyatte/gists{/gist_id}", "starred_url": "https://api.github.com/users/dwyatte/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dwyatte/subscriptions", "organizations_url": "https://api.github.com/users/dwyatte/orgs", "repos_url": "https://api.github.com/users/dwyatte/repos", "events_url": "https://api.github.com/users/dwyatte/events{/privacy}", "received_events_url": "https://api.github.com/users/dwyatte/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "> download_and_prepare is not called when streaming a dataset, so we may need to have storage_options in the DatasetBuilder.__init__ ? This way it could also be passed later to as_streaming_dataset and the StreamingDownloadManager\r\n\r\n> Currently the storage_options parameter in download_and_prepare are for the target filesystem where the dataset must be downloaded and prepared as arrow files\r\n\r\nAh, I noted this when looking for ways to plumb down `storage_options` although I think I was looking at adding to `BuilderConfig`. The `DatasetBuilder` constructor looks more appropriate for this, will get that added in a future commit", "Noting as experimental SGTM. The only tests I can think of to add at the moment would be mocks that assert the storage options get passed all the way down using `mock.assert_called_with` but if Hugging Face has some S3/GCS buckets for testing, maybe those would be better in a future PR. Let me know what you think", "I think adding tests with the mockfs fixture will do the job. Tests and docs can be added when request_etag and is_remote_url support fsspec (right now they would fail with mockfs).\r\n\r\nLet's see in a subsequent PR, this is exciting ! :)", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009217 / 0.011353 (-0.002136) | 0.006275 / 0.011008 (-0.004733) | 0.124361 / 0.038508 (0.085853) | 0.035680 / 0.023109 (0.012570) | 0.395255 / 0.275898 (0.119357) | 0.426104 / 0.323480 (0.102624) | 0.006822 / 0.007986 (-0.001163) | 0.004467 / 0.004328 (0.000138) | 0.099404 / 0.004250 (0.095153) | 0.051919 / 0.037052 (0.014867) | 0.388286 / 0.258489 (0.129797) | 0.426361 / 0.293841 (0.132520) | 0.053100 / 0.128546 (-0.075446) | 0.019453 / 0.075646 (-0.056194) | 0.433139 / 0.419271 (0.013867) | 0.063240 / 0.043533 (0.019707) | 0.381175 / 0.255139 (0.126036) | 0.411686 / 0.283200 (0.128487) | 0.104843 / 0.141683 (-0.036840) | 1.853582 / 1.452155 (0.401427) | 1.935644 / 1.492716 (0.442928) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.218969 / 0.018006 (0.200963) | 0.515011 / 0.000490 (0.514522) 
| 0.004017 / 0.000200 (0.003818) | 0.000097 / 0.000054 (0.000043) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028975 / 0.037411 (-0.008437) | 0.125239 / 0.014526 (0.110713) | 0.131371 / 0.176557 (-0.045185) | 0.203864 / 0.737135 (-0.533271) | 0.140784 / 0.296338 (-0.155554) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.620701 / 0.215209 (0.405492) | 6.263557 / 2.077655 (4.185903) | 2.510058 / 1.504120 (1.005938) | 2.085892 / 1.541195 (0.544697) | 2.170362 / 1.468490 (0.701872) | 1.325600 / 4.584777 (-3.259177) | 5.583355 / 3.745712 (1.837642) | 5.092791 / 5.269862 (-0.177071) | 2.814766 / 4.565676 (-1.750911) | 0.153568 / 0.424275 (-0.270707) | 0.014850 / 0.007607 (0.007243) | 0.787011 / 0.226044 (0.560967) | 7.948813 / 2.268929 (5.679885) | 3.320831 / 55.444624 (-52.123793) | 2.526327 / 6.876477 (-4.350150) | 2.691651 / 2.142072 (0.549579) | 1.521199 / 4.805227 (-3.284028) | 0.269738 / 6.500664 (-6.230926) | 0.082959 / 0.075469 (0.007490) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.740056 / 1.841788 (-0.101732) | 17.699732 / 8.074308 (9.625424) | 22.450689 / 10.191392 (12.259297) | 0.229350 / 0.680424 (-0.451073) | 0.027486 / 0.534201 (-0.506715) | 0.536153 / 0.579283 (-0.043130) | 0.608166 / 0.434364 (0.173802) | 0.629144 / 0.540337 (0.088807) | 0.732671 / 1.386936 (-0.654265) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010147 / 0.011353 (-0.001206) | 0.006484 / 0.011008 (-0.004524) | 0.098664 / 0.038508 (0.060156) | 0.036400 / 0.023109 (0.013291) | 0.432895 / 0.275898 (0.156997) | 0.466433 / 0.323480 (0.142954) | 0.008102 / 0.007986 (0.000117) | 0.004554 / 0.004328 (0.000225) | 0.100466 / 0.004250 (0.096216) | 0.054066 / 0.037052 (0.017013) | 0.439177 / 0.258489 (0.180688) | 0.502907 / 0.293841 (0.209066) | 0.059210 / 0.128546 (-0.069336) | 0.020220 / 0.075646 (-0.055426) | 0.124671 / 0.419271 (-0.294600) | 0.064278 / 0.043533 (0.020746) | 0.435659 / 0.255139 (0.180520) | 0.459670 / 0.283200 (0.176471) | 0.115574 / 0.141683 (-0.026109) | 1.826360 / 1.452155 (0.374205) | 1.943199 / 1.492716 (0.450483) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.238463 / 0.018006 (0.220457) | 0.534889 / 0.000490 (0.534400) | 0.000404 / 0.000200 (0.000204) | 0.000092 / 0.000054 (0.000038) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033210 / 0.037411 (-0.004201) | 0.133529 / 0.014526 (0.119003) | 0.143813 / 0.176557 (-0.032743) | 0.213079 / 0.737135 (-0.524056) | 0.148427 / 0.296338 (-0.147912) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.656819 / 0.215209 (0.441610) | 6.414860 / 2.077655 (4.337205) | 2.756182 / 1.504120 (1.252062) | 2.405268 / 1.541195 (0.864073) | 2.436418 / 1.468490 (0.967928) | 1.289828 / 4.584777 (-3.294949) | 5.572731 / 3.745712 (1.827018) | 3.185432 / 5.269862 (-2.084429) | 2.093220 / 4.565676 (-2.472457) | 0.144817 / 0.424275 (-0.279458) | 0.015674 / 0.007607 (0.008067) | 0.801238 / 0.226044 (0.575194) | 7.955925 / 2.268929 (5.686996) | 3.605670 / 55.444624 (-51.838955) | 2.837568 / 6.876477 (-4.038908) | 2.873848 / 2.142072 (0.731775) | 1.493512 / 4.805227 (-3.311715) | 0.266251 / 6.500664 (-6.234413) | 0.082417 / 0.075469 (0.006948) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.608685 / 1.841788 (-0.233103) | 18.587875 / 8.074308 (10.513567) | 21.786119 / 10.191392 (11.594727) | 0.261748 / 0.680424 (-0.418675) | 0.026228 / 0.534201 (-0.507973) | 0.553538 / 0.579283 (-0.025745) | 0.599780 / 0.434364 (0.165416) | 0.665663 / 0.540337 (0.125325) | 0.792785 / 1.386936 (-0.594151) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#1520e017a9bb6f80e82a38b578213e418ad7e845 \"CML watermark\")\n" ]
1,679,861,377,000
1,680,015,818,000
1,680,015,257,000
CONTRIBUTOR
null
Remove implementation-specific kwargs from `file_utils.fsspec_get` and `file_utils.fsspec_head`, instead allowing them to be passed down via `storage_options`. This fixes an issue where s3fs did not recognize a timeout arg, as well as an issue mentioned in https://github.com/huggingface/datasets/issues/5281, by allowing users to pass `storage_options` all the way down from `datasets.load_dataset` to support implementation-specific credentials.

Supports something like the following to provide credentials explicitly instead of relying on boto's methods of locating them:

```python
load_dataset(..., data_files=["s3://..."], storage_options={"profile": "..."})
```
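As another illustration of the same mechanism, s3fs-style credentials could be passed explicitly; the `key`/`secret` names below are s3fs parameters (everything in `storage_options` is forwarded to the fsspec filesystem), and the bucket path is hypothetical:

```python
from datasets import load_dataset

ds = load_dataset(
    "csv",
    data_files="s3://my-bucket/train.csv",   # hypothetical path
    storage_options={
        "key": "<aws-access-key-id>",        # s3fs kwarg, not a datasets kwarg
        "secret": "<aws-secret-access-key>",
    },
)
```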
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5673/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5673/timeline
null
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5673", "html_url": "https://github.com/huggingface/datasets/pull/5673", "diff_url": "https://github.com/huggingface/datasets/pull/5673.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5673.patch", "merged_at": "2023-03-28T14:54:17" }
true
https://api.github.com/repos/huggingface/datasets/issues/5672
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5672/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5672/comments
https://api.github.com/repos/huggingface/datasets/issues/5672/events
https://github.com/huggingface/datasets/issues/5672
1,641,005,322
I_kwDODunzps5hz8EK
5,672
Pushing dataset to hub crash
{ "login": "tzvc", "id": 14275989, "node_id": "MDQ6VXNlcjE0Mjc1OTg5", "avatar_url": "https://avatars.githubusercontent.com/u/14275989?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tzvc", "html_url": "https://github.com/tzvc", "followers_url": "https://api.github.com/users/tzvc/followers", "following_url": "https://api.github.com/users/tzvc/following{/other_user}", "gists_url": "https://api.github.com/users/tzvc/gists{/gist_id}", "starred_url": "https://api.github.com/users/tzvc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tzvc/subscriptions", "organizations_url": "https://api.github.com/users/tzvc/orgs", "repos_url": "https://api.github.com/users/tzvc/repos", "events_url": "https://api.github.com/users/tzvc/events{/privacy}", "received_events_url": "https://api.github.com/users/tzvc/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi ! It's been fixed by https://github.com/huggingface/datasets/pull/5598. We're doing a new release tomorrow with the fix and you'll be able to push your 100k images ;)\r\n\r\nBasically `push_to_hub` used to fail if the remote repository already exists and has a README.md without dataset_info in the YAML tags.\r\n\r\nIn the meantime you can install datasets from source", "Hi @lhoestq ,\r\n\r\nWhat version of datasets library fix this case? I am using the last `v2.10.1` and I get the same error.", "We just released 2.11 which includes a fix :)" ]
1,679,852,533,000
1,680,163,865,000
1,680,163,865,000
NONE
null
### Describe the bug Uploading a dataset with `push_to_hub()` fails without error description. ### Steps to reproduce the bug Hey there, I've built a image dataset of 100k images + text pair as described here https://huggingface.co/docs/datasets/image_dataset#imagefolder Now I'm trying to push it to the hub but I'm running into issues. First, I tried doing it via git directly, I added all the files in git lfs and pushed but I got hit with an error saying huggingface only accept up to 10k files in a folder. So I'm now trying with the `push_to_hub()` func as follow: ```python from datasets import load_dataset import os dataset = load_dataset("imagefolder", data_dir="./data", split="train") dataset.push_to_hub("tzvc/organization-logos", token=os.environ.get('HF_TOKEN')) ``` But again, this produces an error: ``` Resolving data files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 100212/100212 [00:00<00:00, 439108.61it/s] Downloading and preparing dataset imagefolder/default to /home/contact_theochampion/.cache/huggingface/datasets/imagefolder/default-20567ffc703aa314/0.0.0/37fbb85cc714a338bea574ac6c7d0b5be5aff46c1862c1989b20e0771199e93f... Downloading data files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 100211/100211 [00:00<00:00, 149323.73it/s] Downloading data files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1/1 [00:00<00:00, 15947.92it/s] Extracting data files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1/1 [00:00<00:00, 2245.34it/s] Dataset imagefolder downloaded and prepared to /home/contact_theochampion/.cache/huggingface/datasets/imagefolder/default-20567ffc703aa314/0.0.0/37fbb85cc714a338bea574ac6c7d0b5be5aff46c1862c1989b20e0771199e93f. Subsequent calls will reuse this data. Resuming upload of the dataset shards. 
Pushing dataset shards to the dataset hub: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 14/14 [00:31<00:00, 2.24s/it] Downloading metadata: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 118/118 [00:00<00:00, 225kB/s] Traceback (most recent call last): File "/home/contact_theochampion/organization-logos/push_to_hub.py", line 5, in <module> dataset.push_to_hub("tzvc/organization-logos", token=os.environ.get('HF_TOKEN')) File "/home/contact_theochampion/.local/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 5245, in push_to_hub repo_info = dataset_infos[next(iter(dataset_infos))] StopIteration ``` What could be happening here ? ### Expected behavior The dataset is pushed to the hub ### Environment info - `datasets` version: 2.10.1 - Platform: Linux-5.10.0-21-cloud-amd64-x86_64-with-glibc2.31 - Python version: 3.9.2 - PyArrow version: 11.0.0 - Pandas version: 1.5.3
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5672/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5672/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5671
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5671/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5671/comments
https://api.github.com/repos/huggingface/datasets/issues/5671/events
https://github.com/huggingface/datasets/issues/5671
1,640,840,012
I_kwDODunzps5hzTtM
5,671
How to use `load_dataset('glue', 'cola')`
{ "login": "makinzm", "id": 40193664, "node_id": "MDQ6VXNlcjQwMTkzNjY0", "avatar_url": "https://avatars.githubusercontent.com/u/40193664?v=4", "gravatar_id": "", "url": "https://api.github.com/users/makinzm", "html_url": "https://github.com/makinzm", "followers_url": "https://api.github.com/users/makinzm/followers", "following_url": "https://api.github.com/users/makinzm/following{/other_user}", "gists_url": "https://api.github.com/users/makinzm/gists{/gist_id}", "starred_url": "https://api.github.com/users/makinzm/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/makinzm/subscriptions", "organizations_url": "https://api.github.com/users/makinzm/orgs", "repos_url": "https://api.github.com/users/makinzm/repos", "events_url": "https://api.github.com/users/makinzm/events{/privacy}", "received_events_url": "https://api.github.com/users/makinzm/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Sounds like an issue with incompatible `transformers` dependencies versions.\r\n\r\nCan you try to update `transformers` ?\r\n\r\nEDIT: I checked the `transformers` dependencies and it seems like you need `tokenizers>=0.10.1,<0.11` with `transformers==4.5.1`\r\n\r\nEDIT2: this old version of `datasets` seems to import `transformers` but it's no longer the case, so you could also simply update `datasets` and `transformers` won't be imported", "Thank you for advising me to update these libraries versions.\r\n\r\nI can implement codes using `datasets==2.10.1` and `transformers==4.27.3`" ]
1,679,823,634,000
1,679,989,424,000
1,679,989,423,000
NONE
null
### Describe the bug

I'm new to using HuggingFace datasets, but I cannot use `load_dataset('glue', 'cola')`.

- I was stuck on the following problem:

```python
from datasets import load_dataset

cola_dataset = load_dataset('glue', 'cola')
---------------------------------------------------------------------------
InvalidVersion                            Traceback (most recent call last)
File <timed exec>:1

(Omit because of long error message)

File /usr/local/lib/python3.8/site-packages/packaging/version.py:197, in Version.__init__(self, version)
    195 match = self._regex.search(version)
    196 if not match:
--> 197     raise InvalidVersion(f"Invalid version: '{version}'")
    199 # Store the parsed out pieces of the version
    200 self._version = _Version(
    201     epoch=int(match.group("epoch")) if match.group("epoch") else 0,
    202     release=tuple(int(i) for i in match.group("release").split(".")),
   (...)
    208     local=_parse_local_version(match.group("local")),
    209 )

InvalidVersion: Invalid version: '0.10.1,<0.11'
```

- You can check the full error message in my repository: [MLOps-Basics/week_0_project_setup/experimental_notebooks/data_exploration.ipynb](https://github.com/makinzm/MLOps-Basics/blob/eabab4b837880607d9968d3fa687c70177b2affd/week_0_project_setup/experimental_notebooks/data_exploration.ipynb)

### Steps to reproduce the bug

- This is my repository to reproduce: [MLOps-Basics/week_0_project_setup](https://github.com/makinzm/MLOps-Basics/tree/eabab4b837880607d9968d3fa687c70177b2affd/week_0_project_setup)

1. cd `/DockerImage` and run `docker build . -t week0`
2. cd `/` and run `docker-compose up`
3. Run `experimental_notebooks/data_exploration.ipynb`

----

Just to be sure, I wrote down the Dockerfile and requirements.txt:

- Dockerfile

```Dockerfile
FROM python:3.8

WORKDIR /root/working

RUN apt-get update && \
    apt-get install -y python3-dev python3-pip python3-venv && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*

COPY requirements.txt .
RUN pip3 install --no-cache-dir jupyter notebook && pip install --no-cache-dir -r requirements.txt

CMD ["bash"]
```

- requirements.txt

```txt
pytorch-lightning==1.2.10
datasets==1.6.2
transformers==4.5.1
scikit-learn==0.24.2
```

### Expected behavior

There is no bug when running `load_dataset('glue', 'cola')`.

### Environment info

I already wrote it above.
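The traceback boils down to `packaging` being handed a version *specifier* where it expects a single version; a short sketch of that failure mode, reconstructed from the error message rather than from the library internals:

```python
from packaging.version import InvalidVersion, Version

try:
    Version("0.10.1,<0.11")  # a specifier range, not a single version
except InvalidVersion as err:
    print(err)  # Invalid version: '0.10.1,<0.11'
```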
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5671/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5671/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5670
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5670/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5670/comments
https://api.github.com/repos/huggingface/datasets/issues/5670/events
https://github.com/huggingface/datasets/issues/5670
1,640,607,045
I_kwDODunzps5hya1F
5,670
Unable to load multi-class classification datasets
{ "login": "ysahil97", "id": 19690506, "node_id": "MDQ6VXNlcjE5NjkwNTA2", "avatar_url": "https://avatars.githubusercontent.com/u/19690506?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ysahil97", "html_url": "https://github.com/ysahil97", "followers_url": "https://api.github.com/users/ysahil97/followers", "following_url": "https://api.github.com/users/ysahil97/following{/other_user}", "gists_url": "https://api.github.com/users/ysahil97/gists{/gist_id}", "starred_url": "https://api.github.com/users/ysahil97/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ysahil97/subscriptions", "organizations_url": "https://api.github.com/users/ysahil97/orgs", "repos_url": "https://api.github.com/users/ysahil97/repos", "events_url": "https://api.github.com/users/ysahil97/events{/privacy}", "received_events_url": "https://api.github.com/users/ysahil97/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi ! This sounds related to https://github.com/huggingface/datasets/issues/5406\r\n\r\nUpdating `datasets` fixes the issue ;)", "Thanks @lhoestq!\r\n\r\nI'll close this issue now." ]
1,679,767,575,000
1,679,957,696,000
1,679,957,696,000
NONE
null
### Describe the bug I've been playing around with huggingface library, mostly with `datasets` and wanted to download the multi class classification datasets to fine tune BERT on this task. ([link](https://huggingface.co/docs/transformers/training#train-with-pytorch-trainer)). While loading the dataset, I'm getting the following error snippet. ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) Cell In[44], line 3 1 from datasets import load_dataset ----> 3 imdb_dataset = load_dataset("yelp_review_full") 4 imdb_dataset File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/load.py:1719, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs) 1716 ignore_verifications = ignore_verifications or save_infos 1718 # Create a dataset builder -> 1719 builder_instance = load_dataset_builder( 1720 path=path, 1721 name=name, 1722 data_dir=data_dir, 1723 data_files=data_files, 1724 cache_dir=cache_dir, 1725 features=features, 1726 download_config=download_config, 1727 download_mode=download_mode, 1728 revision=revision, 1729 use_auth_token=use_auth_token, 1730 **config_kwargs, 1731 ) 1733 # Return iterable dataset in case of streaming 1734 if streaming: File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/load.py:1523, in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, **config_kwargs) 1520 raise ValueError(error_msg) 1522 # Instantiate the dataset builder -> 1523 builder_instance: DatasetBuilder = builder_cls( 1524 cache_dir=cache_dir, 1525 config_name=config_name, 1526 data_dir=data_dir, 1527 data_files=data_files, 1528 hash=hash, 1529 features=features, 1530 use_auth_token=use_auth_token, 1531 **builder_kwargs, 1532 **config_kwargs, 1533 ) 1535 return builder_instance File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/builder.py:1292, in GeneratorBasedBuilder.__init__(self, writer_batch_size, *args, **kwargs) 1291 def __init__(self, *args, writer_batch_size=None, **kwargs): -> 1292 super().__init__(*args, **kwargs) 1293 # Batch size used by the ArrowWriter 1294 # It defines the number of samples that are kept in memory before writing them 1295 # and also the length of the arrow chunks 1296 # None means that the ArrowWriter will use its default value 1297 self._writer_batch_size = writer_batch_size or self.DEFAULT_WRITER_BATCH_SIZE File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/builder.py:312, in DatasetBuilder.__init__(self, cache_dir, config_name, hash, base_path, info, features, use_auth_token, repo_id, data_files, data_dir, name, **config_kwargs) 309 # prepare info: DatasetInfo are a standardized dataclass across all datasets 310 # Prefill datasetinfo 311 if info is None: --> 312 info = self.get_exported_dataset_info() 313 info.update(self._info()) 314 info.builder_name = self.name File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/builder.py:412, in DatasetBuilder.get_exported_dataset_info(self) 400 def get_exported_dataset_info(self) -> DatasetInfo: 401 """Empty DatasetInfo if doesn't exist 402 403 Example: (...) 
410 ``` 411 """ --> 412 return self.get_all_exported_dataset_infos().get(self.config.name, DatasetInfo()) File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/builder.py:398, in DatasetBuilder.get_all_exported_dataset_infos(cls) 385 @classmethod 386 def get_all_exported_dataset_infos(cls) -> DatasetInfosDict: 387 """Empty dict if doesn't exist 388 389 Example: (...) 396 ``` 397 """ --> 398 return DatasetInfosDict.from_directory(cls.get_imported_module_dir()) File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/info.py:370, in DatasetInfosDict.from_directory(cls, dataset_infos_dir) 368 dataset_metadata = DatasetMetadata.from_readme(Path(dataset_infos_dir) / "README.md") 369 if "dataset_info" in dataset_metadata: --> 370 return cls.from_metadata(dataset_metadata) 371 if os.path.exists(os.path.join(dataset_infos_dir, config.DATASETDICT_INFOS_FILENAME)): 372 # this is just to have backward compatibility with dataset_infos.json files 373 with open(os.path.join(dataset_infos_dir, config.DATASETDICT_INFOS_FILENAME), encoding="utf-8") as f: File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/info.py:396, in DatasetInfosDict.from_metadata(cls, dataset_metadata) 387 return cls( 388 { 389 dataset_info_yaml_dict.get("config_name", "default"): DatasetInfo._from_yaml_dict( (...) 393 } 394 ) 395 else: --> 396 dataset_info = DatasetInfo._from_yaml_dict(dataset_metadata["dataset_info"]) 397 dataset_info.config_name = dataset_metadata["dataset_info"].get("config_name", "default") 398 return cls({dataset_info.config_name: dataset_info}) File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/info.py:332, in DatasetInfo._from_yaml_dict(cls, yaml_data) 330 yaml_data = copy.deepcopy(yaml_data) 331 if yaml_data.get("features") is not None: --> 332 yaml_data["features"] = Features._from_yaml_list(yaml_data["features"]) 333 if yaml_data.get("splits") is not None: 334 yaml_data["splits"] = SplitDict._from_yaml_list(yaml_data["splits"]) File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/features/features.py:1745, in Features._from_yaml_list(cls, yaml_data) 1742 else: 1743 raise TypeError(f"Expected a dict or a list but got {type(obj)}: {obj}") -> 1745 return cls.from_dict(from_yaml_inner(yaml_data)) File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/features/features.py:1741, in Features._from_yaml_list.<locals>.from_yaml_inner(obj) 1739 elif isinstance(obj, list): 1740 names = [_feature.pop("name") for _feature in obj] -> 1741 return {name: from_yaml_inner(_feature) for name, _feature in zip(names, obj)} 1742 else: 1743 raise TypeError(f"Expected a dict or a list but got {type(obj)}: {obj}") File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/features/features.py:1741, in <dictcomp>(.0) 1739 elif isinstance(obj, list): 1740 names = [_feature.pop("name") for _feature in obj] -> 1741 return {name: from_yaml_inner(_feature) for name, _feature in zip(names, obj)} 1742 else: 1743 raise TypeError(f"Expected a dict or a list but got {type(obj)}: {obj}") File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/features/features.py:1736, in Features._from_yaml_list.<locals>.from_yaml_inner(obj) 1734 return {"_type": 
snakecase_to_camelcase(obj["dtype"])} 1735 else: -> 1736 return from_yaml_inner(obj["dtype"]) 1737 else: 1738 return {"_type": snakecase_to_camelcase(_type), **unsimplify(obj)[_type]} File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/features/features.py:1738, in Features._from_yaml_list.<locals>.from_yaml_inner(obj) 1736 return from_yaml_inner(obj["dtype"]) 1737 else: -> 1738 return {"_type": snakecase_to_camelcase(_type), **unsimplify(obj)[_type]} 1739 elif isinstance(obj, list): 1740 names = [_feature.pop("name") for _feature in obj] File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/features/features.py:1706, in Features._from_yaml_list.<locals>.unsimplify(feature) 1704 if isinstance(feature.get("class_label"), dict) and isinstance(feature["class_label"].get("names"), dict): 1705 label_ids = sorted(feature["class_label"]["names"]) -> 1706 if label_ids and label_ids != list(range(label_ids[-1] + 1)): 1707 raise ValueError( 1708 f"ClassLabel expected a value for all label ids [0:{label_ids[-1] + 1}] but some ids are missing." 1709 ) 1710 feature["class_label"]["names"] = [feature["class_label"]["names"][label_id] for label_id in label_ids] TypeError: can only concatenate str (not "int") to str ``` The same issue happens when I try to load `go-emotions` multi class classification dataset. Could somebody guide me on how to fix this issue? ### Steps to reproduce the bug Run the following code snippet in a python script/ notebook cell: ``` from datasets import load_dataset yelp_dataset = load_dataset("yelp_review_full") yelp_dataset ``` ### Expected behavior The dataset should be loaded perfectly, which showing the train, test and unsupervised splits with the basic data statistics ### Environment info - `datasets` version: 2.6.1 - Platform: Linux-5.4.0-124-generic-x86_64-with-glibc2.31 - Python version: 3.10.9 - PyArrow version: 8.0.0 - Pandas version: 1.5.3
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5670/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5670/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5669
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5669/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5669/comments
https://api.github.com/repos/huggingface/datasets/issues/5669/events
https://github.com/huggingface/datasets/issues/5669
1,638,070,046
I_kwDODunzps5hovce
5,669
Almost identical datasets, huge performance difference
{ "login": "eli-osherovich", "id": 2437102, "node_id": "MDQ6VXNlcjI0MzcxMDI=", "avatar_url": "https://avatars.githubusercontent.com/u/2437102?v=4", "gravatar_id": "", "url": "https://api.github.com/users/eli-osherovich", "html_url": "https://github.com/eli-osherovich", "followers_url": "https://api.github.com/users/eli-osherovich/followers", "following_url": "https://api.github.com/users/eli-osherovich/following{/other_user}", "gists_url": "https://api.github.com/users/eli-osherovich/gists{/gist_id}", "starred_url": "https://api.github.com/users/eli-osherovich/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eli-osherovich/subscriptions", "organizations_url": "https://api.github.com/users/eli-osherovich/orgs", "repos_url": "https://api.github.com/users/eli-osherovich/repos", "events_url": "https://api.github.com/users/eli-osherovich/events{/privacy}", "received_events_url": "https://api.github.com/users/eli-osherovich/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Do I miss something here?", "Hi! \r\n\r\nThe first dataset stores images as bytes (the \"image\" column type is `datasets.Image()`) and decodes them as `PIL.Image` objects and the second dataset stores them as variable-length lists (the \"image\" column type is `datasets.Sequence(...)`)), so I guess going from `arrow bytes -> NumPy -> decoding as PIL.Image -> PyTorch` is faster than going from `arrow list -> NumPy -> PyTorch`. \r\n\r\nTo store image bytes in the second example, you can do the following:\r\n\r\n```python\r\ndef transform(example):\r\n example[\"image2\"] = cv2.imread(example[\"image_file_path\"])\r\n return example\r\n\r\nfeatures = dataset.features.copy()\r\ndel features[\"image\"]\r\nfeatures[\"image2\"] = datasets.Image()\r\ndataset2 = dataset.map(transform, remove_columns=[\"image\"], features=features)\r\n\r\nfor x in DataLoader(dataset2.with_format(\"torch\"), batch_size=16, shuffle=True, num_workers=8):\r\n pass\r\n```", "Thanks, @mariosasko. I could not understand why a (decoded) sequence should be MUCH slower than an encoded image (that must be decoded every time). At any rate, I tried you suggestion. It made the `map` step to run extremely slow (consumes all the 16GB of memory and starts swapping)\r\n\r\nI tried also the easiest (as I see it) scenario, where images are kept as bytes, but it made things even worse: not only it was extremely slow, but also crashes\r\n\r\n```python\r\n\r\ndef transform(example):\r\n example[\"image2\"] = cv2.imread(example[\"image_file_path\"]).tobytes()\r\n return example\r\n\r\ndataset2 = dataset.map(transform, remove_columns=[\"image\"])\r\n\r\nfor x in DataLoader(dataset2.with_format(\"torch\"), batch_size=16, shuffle=True, num_workers=8):\r\n pass\r\n\r\n\r\nResource temporarily unavailable (src/thread.cpp:269)\r\nOutput exceeds the size limit. 
Open the full output data in a text editor\r\n---------------------------------------------------------------------------\r\nRuntimeError Traceback (most recent call last)\r\nFile ~/virtenvs/py310/lib/python3.10/site-packages/torch/utils/data/dataloader.py:1133, in _MultiProcessingDataLoaderIter._try_get_data(self, timeout)\r\n 1132 try:\r\n-> 1133 data = self._data_queue.get(timeout=timeout)\r\n 1134 return (True, data)\r\n\r\nFile ~/virtenvs/py310/lib/python3.10/multiprocessing/queues.py:113, in Queue.get(self, block, timeout)\r\n 112 timeout = deadline - time.monotonic()\r\n--> 113 if not self._poll(timeout):\r\n 114 raise Empty\r\n\r\nFile ~/virtenvs/py310/lib/python3.10/multiprocessing/connection.py:257, in _ConnectionBase.poll(self, timeout)\r\n 256 self._check_readable()\r\n--> 257 return self._poll(timeout)\r\n\r\nFile ~/virtenvs/py310/lib/python3.10/multiprocessing/connection.py:424, in Connection._poll(self, timeout)\r\n 423 def _poll(self, timeout):\r\n--> 424 r = wait([self], timeout)\r\n 425 return bool(r)\r\n\r\nFile ~/virtenvs/py310/lib/python3.10/multiprocessing/connection.py:931, in wait(object_list, timeout)\r\n 930 while True:\r\n--> 931 ready = selector.select(timeout)\r\n 932 if ready:\r\n...\r\n-> 1146 raise RuntimeError('DataLoader worker (pid(s) {}) exited unexpectedly'.format(pids_str)) from e\r\n 1147 if isinstance(e, queue.Empty):\r\n 1148 return (False, None)\r\n\r\nRuntimeError: DataLoader worker (pid(s) 195393) exited unexpectedly\r\nResource temporarily unavailable (src/thread.cpp:269)\r\nResource temporarily unavailable (src/thread.cpp:269)\r\nResource temporarily unavailable (src/thread.cpp:269)\r\nResource temporarily unavailable (src/thread.cpp:269)\r\nResource temporarily unavailable (src/thread.cpp:269)\r\n```\r\n", "Correction: the `beans` dataset stores the image file paths, not the bytes.\r\n\r\nFor your use case, I think it makes more sense to use `with_tranform` than `map` and lazily decode images with `cv2.imread` when indexing an example/batch:\r\n```python\r\nimport cv2\r\n\r\ndef transform(batch):\r\n batch[\"image2\"] = np.stack([cv2.imread(image_file_path) for image_file_path in batch[\"image_file_path\"]])\r\n return batch\r\n\r\ndataset = dataset.with_transform(transform)\r\n```\r\n", "This is incorrect.\n\nDid you try to run it? dataset[0] returns a tensor of numbers. dataset2[0]\nreturns the same tensor, but after a few long seconds. 
Looping over a\nthousand of images cannot take 15 minutes.\n\nOn Fri, 24 Mar 2023 at 19:28 Mario Ε aΕ‘ko ***@***.***> wrote:\n\n> Correction: the beans dataset stores the image file paths, not the bytes.\n>\n> For your use case, I think it makes more sense to use with_tranform than\n> map and lazily decode images with cv2.imread when accessing an\n> example/batch:\n>\n> import cv2\n> def transform(batch):\n> batch[\"image2\"] = np.stack([cv2.imread(image_file_path) for image_file_path in batch[\"image_file_path\"]])\n> return batch\n> dataset = dataset.with_transform(transform)\n>\n> β€”\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/issues/5669#issuecomment-1483084347>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AASS73SHRWXIQX6SCYCJ7ITW5XDUDANCNFSM6AAAAAAWFSHWEM>\n> .\n> You are receiving this because you authored the thread.Message ID:\n> ***@***.***>\n>\n", "I updated the transform with the NumPy -> PyTorch conversion.\r\n\r\nI'm sharing the entire code:\r\n```python\r\nimport cv2\r\nimport numpy as np\r\nimport datasets\r\nimport torch\r\nfrom datasets import load_dataset\r\nfrom torch.utils.data import DataLoader\r\n\r\ndataset = load_dataset(\"beans\", split=\"train\")\r\n\r\ndef transform(batch):\r\n # # Pillow decodes as RGB\r\n # batch[\"image\"] = torch.stack([torch.from_numpy(cv2.cvtColor(cv2.imread(image_file_path), cv2.COLOR_BGR2RGB)) for image_file_path in batch[\"image_file_path\"]])\r\n batch[\"image\"] = torch.stack([torch.from_numpy(cv2.imread(image_file_path)) for image_file_path in batch[\"image_file_path\"]])\r\n batch[\"labels\"] = torch.tensor(batch[\"labels\"])\r\n return batch\r\n\r\ndataset2 = dataset.cast_column(\"image\", datasets.Image(decode=False)).with_transform(transform)\r\n\r\nfor x in DataLoader(dataset2, batch_size=16, shuffle=True, num_workers=8):\r\n pass\r\n```\r\n\r\nThis code is β‰ˆ 10% faster on my machine than the default decoding with Pillow and `.with_format(\"torch\")`.", "Thanks, @mariosasko \r\nMy question remain unanswered though. Why is the `map`ed dataset so slow? My understanding is that a dataset of numpy arrays should be must faster than a dataset that has to decode images into numpy arrays every time one accesses an item. " ]
1,679,595,620,000
1,681,066,583,000
null
CONTRIBUTOR
null
### Describe the bug I am struggling to understand the (huge) performance difference between two datasets that are almost identical. ### Steps to reproduce the bug # Fast (normal) dataset speed: ```python import cv2 from datasets import load_dataset from torch.utils.data import DataLoader dataset = load_dataset("beans", split="train") for x in DataLoader(dataset.with_format("torch"), batch_size=16, shuffle=True, num_workers=8): pass ``` The above pass over the dataset takes about 1.5 seconds on my computer. However, if I re-create (almost) the same dataset, the sweep takes a HUGE amount of time: 15 minutes. Steps to reproduce: ```python def transform(example): example["image2"] = cv2.imread(example["image_file_path"]) return example dataset2 = dataset.map(transform, remove_columns=["image"]) for x in DataLoader(dataset2.with_format("torch"), batch_size=16, shuffle=True, num_workers=8): pass ``` ### Expected behavior Same timings ### Environment info python==3.10.9 datasets==2.10.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5669/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5669/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5668
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5668/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5668/comments
https://api.github.com/repos/huggingface/datasets/issues/5668/events
https://github.com/huggingface/datasets/pull/5668
1,638,018,598
PR_kwDODunzps5MwuIp
5,668
Support for downloading only provided split
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5668). All of your documentation changes will be reflected on that endpoint.", "My previous comment didn't create the retro-link in the PR. I write it here again.\r\n\r\nYou can check the context and the discussions we had about this feature enhancement in this PR:\r\n- #2249" ]
1,679,594,019,000
1,679,640,194,000
null
CONTRIBUTOR
null
We can pass the requested split to `_split_generators()`. But I'm not sure it's possible to solve the caching issues, mostly around `dataset_info.json`.
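For context, here is a minimal sketch of what split-aware downloading could look like in a loading script; the `splits` parameter threaded into `_split_generators` is an assumption about this PR's design, not its actual implementation, and the URLs are hypothetical:

```python
import json

import datasets

_URLS = {  # hypothetical data files
    "train": "https://example.com/train.jsonl",
    "test": "https://example.com/test.jsonl",
}

class MyDataset(datasets.GeneratorBasedBuilder):
    VERSION = datasets.Version("1.0.0")

    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features({"text": datasets.Value("string")})
        )

    def _split_generators(self, dl_manager, splits=None):
        # Hypothetical `splits` argument: download only the requested subset.
        requested = splits or list(_URLS)
        downloaded = dl_manager.download({s: _URLS[s] for s in requested})
        return [
            datasets.SplitGenerator(name=split, gen_kwargs={"filepath": downloaded[split]})
            for split in requested
        ]

    def _generate_examples(self, filepath):
        with open(filepath, encoding="utf-8") as f:
            for i, line in enumerate(f):
                yield i, {"text": json.loads(line)["text"]}
```

The caching concern above is real under such a design: `dataset_info.json` records the sizes of all splits, so a cache built from a single split would not match the info computed for the full dataset.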
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5668/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5668/timeline
null
null
1
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5668", "html_url": "https://github.com/huggingface/datasets/pull/5668", "diff_url": "https://github.com/huggingface/datasets/pull/5668.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5668.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/5667
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5667/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5667/comments
https://api.github.com/repos/huggingface/datasets/issues/5667/events
https://github.com/huggingface/datasets/pull/5667
1,637,789,361
PR_kwDODunzps5Mv8Im
5,667
Jax requires jaxlib
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008592 / 0.011353 (-0.002761) | 0.005182 / 0.011008 (-0.005826) | 0.097916 / 0.038508 (0.059408) | 0.034612 / 0.023109 (0.011503) | 0.313760 / 0.275898 (0.037862) | 0.353422 / 0.323480 (0.029942) | 0.005880 / 0.007986 (-0.002106) | 0.004123 / 0.004328 (-0.000205) | 0.073634 / 0.004250 (0.069384) | 0.049349 / 0.037052 (0.012297) | 0.317381 / 0.258489 (0.058892) | 0.365821 / 0.293841 (0.071980) | 0.036482 / 0.128546 (-0.092065) | 0.012126 / 0.075646 (-0.063521) | 0.334640 / 0.419271 (-0.084631) | 0.050551 / 0.043533 (0.007018) | 0.310472 / 0.255139 (0.055333) | 0.349049 / 0.283200 (0.065850) | 0.101343 / 0.141683 (-0.040340) | 1.447903 / 1.452155 (-0.004252) | 1.518793 / 1.492716 (0.026077) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.210971 / 0.018006 (0.192965) | 0.449471 / 0.000490 (0.448982) | 0.003596 / 0.000200 (0.003396) | 0.000084 / 0.000054 (0.000029) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027386 / 0.037411 (-0.010025) | 0.112683 / 0.014526 (0.098157) | 0.117603 / 0.176557 (-0.058954) | 0.174186 / 0.737135 (-0.562949) | 0.123510 / 0.296338 (-0.172829) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.422595 / 0.215209 (0.207386) | 4.224713 / 2.077655 (2.147058) | 
2.006359 / 1.504120 (0.502240) | 1.823767 / 1.541195 (0.282572) | 1.898340 / 1.468490 (0.429849) | 0.721656 / 4.584777 (-3.863121) | 3.823498 / 3.745712 (0.077785) | 2.172380 / 5.269862 (-3.097481) | 1.469773 / 4.565676 (-3.095904) | 0.086978 / 0.424275 (-0.337297) | 0.012642 / 0.007607 (0.005035) | 0.517830 / 0.226044 (0.291785) | 5.171150 / 2.268929 (2.902221) | 2.495238 / 55.444624 (-52.949386) | 2.114380 / 6.876477 (-4.762097) | 2.274329 / 2.142072 (0.132257) | 0.863855 / 4.805227 (-3.941372) | 0.174127 / 6.500664 (-6.326537) | 0.065939 / 0.075469 (-0.009530) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.208831 / 1.841788 (-0.632957) | 15.016704 / 8.074308 (6.942396) | 14.721231 / 10.191392 (4.529839) | 0.144140 / 0.680424 (-0.536284) | 0.017781 / 0.534201 (-0.516420) | 0.425679 / 0.579283 (-0.153604) | 0.416747 / 0.434364 (-0.017617) | 0.490160 / 0.540337 (-0.050177) | 0.583639 / 1.386936 (-0.803297) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007670 / 0.011353 (-0.003683) | 0.005383 / 0.011008 (-0.005626) | 0.075756 / 0.038508 (0.037248) | 0.033373 / 0.023109 (0.010263) | 0.341017 / 0.275898 (0.065119) | 0.378890 / 0.323480 (0.055410) | 0.005945 / 0.007986 (-0.002040) | 0.004179 / 0.004328 (-0.000150) | 0.074588 / 0.004250 (0.070337) | 0.048564 / 0.037052 (0.011511) | 0.338774 / 0.258489 (0.080285) | 0.391081 / 0.293841 (0.097240) | 0.036659 / 0.128546 (-0.091887) | 0.012241 / 0.075646 (-0.063406) | 0.086910 / 0.419271 (-0.332361) | 0.049745 / 0.043533 (0.006212) | 0.332810 / 0.255139 (0.077671) | 0.360317 / 0.283200 (0.077117) | 0.103399 / 0.141683 (-0.038283) | 1.456754 / 1.452155 (0.004599) | 1.542644 / 1.492716 (0.049928) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.207182 / 0.018006 (0.189176) | 0.455659 / 0.000490 (0.455169) | 0.003609 / 0.000200 (0.003409) | 0.000092 / 0.000054 (0.000038) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029556 / 0.037411 (-0.007856) | 0.114215 / 0.014526 (0.099690) | 0.127721 / 0.176557 (-0.048836) | 0.177070 / 0.737135 (-0.560065) | 0.128840 / 0.296338 (-0.167499) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.428176 / 0.215209 (0.212967) | 4.274324 / 2.077655 (2.196669) | 2.020058 / 1.504120 (0.515938) | 1.823343 / 1.541195 (0.282148) | 1.924688 / 1.468490 (0.456198) | 0.719195 / 4.584777 (-3.865582) | 3.760445 / 3.745712 (0.014733) | 2.133813 / 5.269862 (-3.136049) | 1.364876 / 4.565676 (-3.200801) | 0.087523 / 0.424275 (-0.336752) | 0.013712 / 0.007607 (0.006105) | 0.528403 / 0.226044 (0.302359) | 5.307780 / 2.268929 (3.038851) | 2.496747 / 55.444624 (-52.947877) | 2.169136 / 6.876477 (-4.707341) | 2.235719 / 2.142072 (0.093646) | 0.875281 / 4.805227 (-3.929946) | 0.172369 / 6.500664 (-6.328295) | 0.064667 / 0.075469 (-0.010802) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.262594 / 1.841788 (-0.579193) | 15.182681 / 8.074308 (7.108373) | 14.725663 / 10.191392 (4.534271) | 0.180961 / 0.680424 (-0.499462) | 0.017632 / 0.534201 (-0.516569) | 0.427531 / 0.579283 (-0.151752) | 0.431741 / 0.434364 (-0.002622) | 0.503251 / 0.540337 (-0.037087) | 0.597423 / 1.386936 (-0.789513) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f4cf224dcb1043a272971ed331a214cf65c504be \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009761 / 0.011353 (-0.001592) | 0.006779 / 0.011008 (-0.004229) | 0.132786 / 0.038508 (0.094277) | 0.037721 / 0.023109 (0.014611) | 0.435685 / 0.275898 (0.159787) | 0.447488 / 0.323480 (0.124009) | 0.006848 / 0.007986 (-0.001137) | 0.005099 / 0.004328 (0.000771) | 0.097384 / 0.004250 (0.093133) | 0.056663 / 0.037052 (0.019610) | 0.463407 / 0.258489 (0.204918) | 0.502544 / 0.293841 (0.208703) | 0.053817 / 0.128546 (-0.074729) | 0.020253 / 0.075646 (-0.055393) | 0.446653 / 0.419271 (0.027382) | 0.064465 / 0.043533 (0.020932) | 0.455375 / 0.255139 (0.200236) | 0.458378 / 0.283200 (0.175178) | 0.109124 / 0.141683 (-0.032559) | 1.957338 / 1.452155 (0.505184) | 1.960391 / 1.492716 (0.467674) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.219566 / 0.018006 (0.201560) | 0.558181 / 0.000490 (0.557691) | 0.004678 / 0.000200 (0.004478) | 0.000125 / 0.000054 (0.000071) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032643 / 0.037411 (-0.004768) | 0.147375 / 0.014526 (0.132849) | 0.130821 / 0.176557 (-0.045736) | 0.203202 / 0.737135 (-0.533933) | 0.145186 / 0.296338 (-0.151153) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.665773 / 0.215209 (0.450564) | 6.674021 / 2.077655 (4.596366) | 2.662372 / 1.504120 (1.158253) | 2.333327 / 1.541195 (0.792132) | 2.221413 / 1.468490 (0.752923) | 1.287001 / 4.584777 (-3.297776) | 5.534326 / 3.745712 (1.788614) | 3.188809 / 5.269862 (-2.081052) | 2.261717 / 4.565676 (-2.303960) | 0.151910 / 0.424275 (-0.272366) | 0.020509 / 0.007607 (0.012902) | 0.863608 / 0.226044 (0.637564) | 8.442155 / 2.268929 (6.173227) | 3.438260 / 55.444624 (-52.006364) | 2.692503 / 6.876477 (-4.183974) | 2.810997 / 2.142072 (0.668925) | 1.477345 / 4.805227 (-3.327882) | 0.261942 / 6.500664 (-6.238722) | 0.086347 / 0.075469 (0.010878) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.529072 / 1.841788 (-0.312716) | 17.213019 / 8.074308 (9.138711) | 21.887309 / 10.191392 (11.695917) | 0.259660 / 0.680424 (-0.420763) | 0.027916 / 0.534201 (-0.506285) | 0.554103 / 0.579283 (-0.025180) | 0.614566 / 0.434364 (0.180202) | 0.700456 / 0.540337 (0.160119) 
| 0.756860 / 1.386936 (-0.630077) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009267 / 0.011353 (-0.002086) | 0.006414 / 0.011008 (-0.004594) | 0.102404 / 0.038508 (0.063896) | 0.034885 / 0.023109 (0.011776) | 0.413191 / 0.275898 (0.137293) | 0.483901 / 0.323480 (0.160422) | 0.006614 / 0.007986 (-0.001372) | 0.004608 / 0.004328 (0.000280) | 0.096717 / 0.004250 (0.092467) | 0.055123 / 0.037052 (0.018071) | 0.417786 / 0.258489 (0.159297) | 0.490886 / 0.293841 (0.197045) | 0.056951 / 0.128546 (-0.071595) | 0.021073 / 0.075646 (-0.054574) | 0.116576 / 0.419271 (-0.302695) | 0.063968 / 0.043533 (0.020435) | 0.420495 / 0.255139 (0.165356) | 0.449667 / 0.283200 (0.166467) | 0.115318 / 0.141683 (-0.026365) | 1.899398 / 1.452155 (0.447243) | 1.992175 / 1.492716 (0.499459) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.233076 / 0.018006 (0.215070) | 0.518377 / 0.000490 (0.517887) | 0.000809 / 0.000200 (0.000609) | 0.000101 / 0.000054 (0.000047) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030951 / 0.037411 (-0.006460) | 0.134940 / 0.014526 (0.120414) | 0.147789 / 0.176557 (-0.028767) | 0.205854 / 0.737135 (-0.531281) | 0.146726 / 0.296338 (-0.149613) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.648006 / 0.215209 (0.432797) | 6.416688 / 2.077655 (4.339033) | 2.696462 / 1.504120 (1.192342) | 2.293071 / 1.541195 (0.751877) | 2.319426 / 1.468490 
(0.850935) | 1.332398 / 4.584777 (-3.252379) | 5.706956 / 3.745712 (1.961244) | 4.464473 / 5.269862 (-0.805388) | 2.817364 / 4.565676 (-1.748312) | 0.157595 / 0.424275 (-0.266680) | 0.015721 / 0.007607 (0.008114) | 0.806055 / 0.226044 (0.580010) | 7.927795 / 2.268929 (5.658866) | 3.461251 / 55.444624 (-51.983373) | 2.664466 / 6.876477 (-4.212010) | 2.660041 / 2.142072 (0.517968) | 1.531135 / 4.805227 (-3.274092) | 0.260293 / 6.500664 (-6.240371) | 0.077440 / 0.075469 (0.001971) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.687325 / 1.841788 (-0.154463) | 17.905080 / 8.074308 (9.830772) | 21.046794 / 10.191392 (10.855402) | 0.245335 / 0.680424 (-0.435089) | 0.026830 / 0.534201 (-0.507371) | 0.510798 / 0.579283 (-0.068485) | 0.590041 / 0.434364 (0.155677) | 0.607440 / 0.540337 (0.067102) | 0.725030 / 1.386936 (-0.661906) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#91dcb3636e410a249177f5e0508ed101ad7ee25b \"CML watermark\")\n", "I self-assigned #5666 and I was working on it... without success: https://github.com/huggingface/datasets/tree/fix-5666\r\n\r\nI think your approach is the right one because installation of jax is not trivial...\r\n\r\nNext time it would be better that you self-assign an issue before working on it, so that we avoid duplicate work... :sweat_smile: ", "Oh sorry I forgot to self assign this time", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008436 / 0.011353 (-0.002917) | 0.005702 / 0.011008 (-0.005306) | 0.113518 / 0.038508 (0.075010) | 0.039639 / 0.023109 (0.016530) | 0.353200 / 0.275898 (0.077302) | 0.382428 / 0.323480 (0.058948) | 0.007419 / 0.007986 (-0.000566) | 0.005640 / 0.004328 (0.001311) | 0.083905 / 0.004250 (0.079655) | 0.053258 / 0.037052 (0.016205) | 0.371069 / 0.258489 (0.112580) | 0.390439 / 0.293841 (0.096598) | 0.042679 / 0.128546 (-0.085867) | 0.013438 / 0.075646 (-0.062208) | 0.390116 / 0.419271 (-0.029155) | 0.068782 / 0.043533 (0.025249) | 0.352620 / 0.255139 (0.097481) | 0.371939 / 0.283200 (0.088739) | 
0.126157 / 0.141683 (-0.015525) | 1.694638 / 1.452155 (0.242484) | 1.799211 / 1.492716 (0.306495) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.260099 / 0.018006 (0.242092) | 0.489852 / 0.000490 (0.489362) | 0.012549 / 0.000200 (0.012349) | 0.000275 / 0.000054 (0.000221) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032235 / 0.037411 (-0.005177) | 0.125325 / 0.014526 (0.110799) | 0.137242 / 0.176557 (-0.039315) | 0.206566 / 0.737135 (-0.530570) | 0.143260 / 0.296338 (-0.153078) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.478510 / 0.215209 (0.263301) | 4.746439 / 2.077655 (2.668784) | 2.195072 / 1.504120 (0.690952) | 1.958163 / 1.541195 (0.416969) | 2.028566 / 1.468490 (0.560075) | 0.821289 / 4.584777 (-3.763488) | 4.765529 / 3.745712 (1.019817) | 2.378753 / 5.269862 (-2.891108) | 1.514776 / 4.565676 (-3.050900) | 0.100673 / 0.424275 (-0.323602) | 0.014720 / 0.007607 (0.007113) | 0.606388 / 0.226044 (0.380343) | 5.975285 / 2.268929 (3.706357) | 2.866762 / 55.444624 (-52.577862) | 2.392132 / 6.876477 (-4.484345) | 2.546487 / 2.142072 (0.404415) | 0.982394 / 4.805227 (-3.822833) | 0.201195 / 6.500664 (-6.299469) | 0.077781 / 0.075469 (0.002312) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.420613 / 1.841788 (-0.421174) | 17.743030 / 8.074308 (9.668722) | 16.752344 / 10.191392 (6.560951) | 0.167464 / 0.680424 (-0.512960) | 0.020908 / 0.534201 (-0.513293) | 0.502919 / 0.579283 (-0.076364) | 0.506375 / 0.434364 (0.072011) | 0.602695 / 0.540337 (0.062358) | 0.689398 / 1.386936 (-0.697538) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after 
write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008713 / 0.011353 (-0.002640) | 0.006152 / 0.011008 (-0.004856) | 0.091264 / 0.038508 (0.052756) | 0.040284 / 0.023109 (0.017174) | 0.417598 / 0.275898 (0.141700) | 0.460141 / 0.323480 (0.136661) | 0.006589 / 0.007986 (-0.001397) | 0.004671 / 0.004328 (0.000343) | 0.089360 / 0.004250 (0.085110) | 0.055113 / 0.037052 (0.018061) | 0.415241 / 0.258489 (0.156752) | 0.470566 / 0.293841 (0.176725) | 0.042963 / 0.128546 (-0.085584) | 0.014421 / 0.075646 (-0.061225) | 0.106333 / 0.419271 (-0.312939) | 0.057810 / 0.043533 (0.014277) | 0.417889 / 0.255139 (0.162750) | 0.444236 / 0.283200 (0.161036) | 0.119508 / 0.141683 (-0.022175) | 1.736209 / 1.452155 (0.284055) | 1.790319 / 1.492716 (0.297602) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.219184 / 0.018006 (0.201178) | 0.493931 / 0.000490 (0.493441) | 0.006727 / 0.000200 (0.006527) | 0.000103 / 0.000054 (0.000049) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034415 / 0.037411 (-0.002996) | 0.132165 / 0.014526 (0.117639) | 0.143138 / 0.176557 (-0.033418) | 0.200052 / 0.737135 (-0.537083) | 0.148906 / 0.296338 (-0.147433) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.483686 / 0.215209 (0.268476) | 4.849874 / 2.077655 (2.772220) | 2.374276 / 1.504120 (0.870156) | 2.168334 / 1.541195 (0.627139) | 2.285983 / 1.468490 (0.817493) | 0.833041 / 4.584777 (-3.751735) | 4.665915 / 3.745712 (0.920203) | 4.543559 / 5.269862 (-0.726302) | 2.246926 / 4.565676 (-2.318750) | 0.098490 / 0.424275 (-0.325785) | 0.014934 / 0.007607 (0.007327) | 0.591878 / 0.226044 (0.365834) | 6.039852 / 2.268929 (3.770923) | 2.881244 / 55.444624 (-52.563381) | 2.486297 / 6.876477 (-4.390179) | 2.564642 / 2.142072 (0.422569) | 0.985684 / 4.805227 (-3.819543) | 0.199101 / 6.500664 (-6.301563) | 0.078138 / 0.075469 (0.002669) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow 
|\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.647744 / 1.841788 (-0.194043) | 18.986464 / 8.074308 (10.912156) | 17.246575 / 10.191392 (7.055183) | 0.219151 / 0.680424 (-0.461273) | 0.022219 / 0.534201 (-0.511982) | 0.547207 / 0.579283 (-0.032076) | 0.525943 / 0.434364 (0.091579) | 0.616909 / 0.540337 (0.076572) | 0.757423 / 1.386936 (-0.629513) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f423b69cd4371bd03bb819c60450534f8850ad61 \"CML watermark\")\n" ]
1,679,586,069,000
1,679,588,591,000
1,679,588,092,000
MEMBER
null
close https://github.com/huggingface/datasets/issues/5666
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5667/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5667/timeline
null
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5667", "html_url": "https://github.com/huggingface/datasets/pull/5667", "diff_url": "https://github.com/huggingface/datasets/pull/5667.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5667.patch", "merged_at": "2023-03-23T16:14:52" }
true
https://api.github.com/repos/huggingface/datasets/issues/5666
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5666/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5666/comments
https://api.github.com/repos/huggingface/datasets/issues/5666/events
https://github.com/huggingface/datasets/issues/5666
1,637,675,062
I_kwDODunzps5hnPA2
5,666
Support tensorflow 2.12.0 in CI
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
[]
1,679,582,271,000
1,679,588,094,000
1,679,588,094,000
MEMBER
null
Once we find out the root cause of: - #5663 we should revert the temporary pin on tensorflow introduced by: - #5664
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5666/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5666/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5665
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5665/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5665/comments
https://api.github.com/repos/huggingface/datasets/issues/5665/events
https://github.com/huggingface/datasets/issues/5665
1,637,193,648
I_kwDODunzps5hlZew
5,665
Feature request: IterableDataset.push_to_hub
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
[]
1,679,565,184,000
1,679,565,196,000
null
CONTRIBUTOR
null
### Feature request It'd be great to have a lazy push to hub, similar to the lazy loading we have with `IterableDataset`. Suppose you'd like to filter [LAION](https://huggingface.co/datasets/laion/laion400m) based on certain conditions, but as LAION doesn't fit on your disk, you'd like to leverage streaming: ``` from datasets import load_dataset dataset = load_dataset("laion/laion400m", streaming=True, split="train") ``` Then you could filter the dataset based on certain conditions: ``` filtered_dataset = dataset.filter(lambda example: example['HEIGHT'] > 400) ``` In order to persist this dataset and push it back to the hub, one currently needs to first load the entire filtered dataset on disk and then push: ``` from datasets import Dataset Dataset.from_generator(filtered_dataset.__iter__).push_to_hub(...) ``` It would be great if we could instead lazily push the data to the hub (basically stream the data to the hub), not being limited by our disk size: ``` filtered_dataset.push_to_hub("my-filtered-dataset") ``` ### Motivation This feature would be very useful for people who want to filter huge datasets without having to load the entire dataset, or a filtered version thereof, on their local disk. ### Your contribution Happy to test out a PR :)
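Until such a feature lands, one hedged workaround sketch (the repo id and `shard_size` below are illustrative, not an existing API): buffer the filtered stream into fixed-size Parquet shards and upload each shard with `huggingface_hub`, so local disk usage stays bounded by a single shard:

```python
import os

from datasets import Dataset, load_dataset
from huggingface_hub import HfApi

api = HfApi()
repo_id = "username/my-filtered-dataset"  # hypothetical target repo
api.create_repo(repo_id, repo_type="dataset", exist_ok=True)

dataset = load_dataset("laion/laion400m", streaming=True, split="train")
filtered = dataset.filter(lambda example: example["HEIGHT"] > 400)

shard_size = 100_000  # illustrative shard size
buffer, shard_idx = [], 0

def flush(buffer, shard_idx):
    # Write one shard to disk, upload it, then delete the local copy.
    path = f"shard-{shard_idx:05d}.parquet"
    Dataset.from_list(buffer).to_parquet(path)
    api.upload_file(
        path_or_fileobj=path,
        path_in_repo=f"data/{path}",
        repo_id=repo_id,
        repo_type="dataset",
    )
    os.remove(path)

for example in filtered:
    buffer.append(example)
    if len(buffer) == shard_size:
        flush(buffer, shard_idx)
        buffer, shard_idx = [], shard_idx + 1
if buffer:
    flush(buffer, shard_idx)
```

A built-in `IterableDataset.push_to_hub` could presumably do essentially this internally, committing shards as they are produced.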
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5665/reactions", "total_count": 7, "+1": 7, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5665/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5664
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5664/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5664/comments
https://api.github.com/repos/huggingface/datasets/issues/5664/events
https://github.com/huggingface/datasets/pull/5664
1,637,192,684
PR_kwDODunzps5Mt6vp
5,664
Fix CI by temporarily pinning tensorflow < 2.12.0
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007500 / 0.011353 (-0.003853) | 0.005279 / 0.011008 (-0.005729) | 0.098848 / 0.038508 (0.060340) | 0.035290 / 0.023109 (0.012181) | 0.342676 / 0.275898 (0.066778) | 0.375310 / 0.323480 (0.051830) | 0.006037 / 0.007986 (-0.001948) | 0.004143 / 0.004328 (-0.000185) | 0.075757 / 0.004250 (0.071506) | 0.049436 / 0.037052 (0.012383) | 0.344734 / 0.258489 (0.086245) | 0.388111 / 0.293841 (0.094270) | 0.037079 / 0.128546 (-0.091467) | 0.011986 / 0.075646 (-0.063660) | 0.333911 / 0.419271 (-0.085361) | 0.050415 / 0.043533 (0.006882) | 0.341723 / 0.255139 (0.086584) | 0.364136 / 0.283200 (0.080936) | 0.099371 / 0.141683 (-0.042312) | 1.467030 / 1.452155 (0.014876) | 1.565472 / 1.492716 (0.072755) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.212534 / 0.018006 (0.194528) | 0.435854 / 0.000490 (0.435364) | 0.000419 / 0.000200 (0.000219) | 0.000060 / 0.000054 (0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027957 / 0.037411 (-0.009454) | 0.106835 / 0.014526 (0.092309) | 0.115733 / 0.176557 (-0.060824) | 0.172374 / 0.737135 (-0.564761) | 0.121907 / 0.296338 (-0.174431) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.413195 / 0.215209 (0.197986) | 4.144775 / 2.077655 (2.067120) | 
1.885647 / 1.504120 (0.381527) | 1.645525 / 1.541195 (0.104331) | 1.690117 / 1.468490 (0.221627) | 0.705787 / 4.584777 (-3.878989) | 3.763338 / 3.745712 (0.017626) | 2.163044 / 5.269862 (-3.106818) | 1.478619 / 4.565676 (-3.087057) | 0.086458 / 0.424275 (-0.337817) | 0.012711 / 0.007607 (0.005103) | 0.503592 / 0.226044 (0.277547) | 5.031176 / 2.268929 (2.762248) | 2.345348 / 55.444624 (-53.099276) | 2.064573 / 6.876477 (-4.811903) | 2.203937 / 2.142072 (0.061865) | 0.838761 / 4.805227 (-3.966466) | 0.170116 / 6.500664 (-6.330548) | 0.064012 / 0.075469 (-0.011457) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.190887 / 1.841788 (-0.650901) | 15.091466 / 8.074308 (7.017158) | 14.549112 / 10.191392 (4.357720) | 0.180603 / 0.680424 (-0.499820) | 0.017387 / 0.534201 (-0.516814) | 0.421372 / 0.579283 (-0.157911) | 0.434644 / 0.434364 (0.000281) | 0.496958 / 0.540337 (-0.043380) | 0.593995 / 1.386936 (-0.792941) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007790 / 0.011353 (-0.003563) | 0.005307 / 0.011008 (-0.005701) | 0.074779 / 0.038508 (0.036271) | 0.034442 / 0.023109 (0.011332) | 0.337973 / 0.275898 (0.062075) | 0.371944 / 0.323480 (0.048464) | 0.006088 / 0.007986 (-0.001897) | 0.005619 / 0.004328 (0.001291) | 0.073757 / 0.004250 (0.069507) | 0.049385 / 0.037052 (0.012333) | 0.338326 / 0.258489 (0.079837) | 0.387916 / 0.293841 (0.094075) | 0.037197 / 0.128546 (-0.091350) | 0.012371 / 0.075646 (-0.063275) | 0.086938 / 0.419271 (-0.332334) | 0.051379 / 0.043533 (0.007846) | 0.331580 / 0.255139 (0.076441) | 0.355765 / 0.283200 (0.072565) | 0.103368 / 0.141683 (-0.038315) | 1.475963 / 1.452155 (0.023808) | 1.530579 / 1.492716 (0.037863) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.223037 / 0.018006 (0.205031) | 0.441795 / 0.000490 (0.441305) | 0.003937 / 0.000200 (0.003737) | 0.000090 / 0.000054 (0.000035) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030081 / 0.037411 (-0.007330) | 0.110366 / 0.014526 (0.095841) | 0.124097 / 0.176557 (-0.052459) | 0.176237 / 0.737135 (-0.560898) | 0.127045 / 0.296338 (-0.169293) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.420191 / 0.215209 (0.204982) | 4.186721 / 2.077655 (2.109066) | 1.992336 / 1.504120 (0.488216) | 1.800567 / 1.541195 (0.259373) | 1.917982 / 1.468490 (0.449491) | 0.700932 / 4.584777 (-3.883845) | 3.888631 / 3.745712 (0.142918) | 2.138168 / 5.269862 (-3.131693) | 1.364636 / 4.565676 (-3.201041) | 0.085404 / 0.424275 (-0.338871) | 0.012550 / 0.007607 (0.004943) | 0.526110 / 0.226044 (0.300066) | 5.258717 / 2.268929 (2.989789) | 2.454287 / 55.444624 (-52.990338) | 2.130539 / 6.876477 (-4.745937) | 2.207982 / 2.142072 (0.065909) | 0.839242 / 4.805227 (-3.965985) | 0.167611 / 6.500664 (-6.333053) | 0.065706 / 0.075469 (-0.009763) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.266125 / 1.841788 (-0.575662) | 15.480513 / 8.074308 (7.406205) | 14.959376 / 10.191392 (4.767983) | 0.149195 / 0.680424 (-0.531229) | 0.017881 / 0.534201 (-0.516320) | 0.430863 / 0.579283 (-0.148420) | 0.432878 / 0.434364 (-0.001485) | 0.499605 / 0.540337 (-0.040733) | 0.605592 / 1.386936 (-0.781344) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#c20230f8d8762fb67523677093e95e773ce88786 \"CML watermark\")\n" ]
1,679,565,146,000
1,679,566,631,000
1,679,566,194,000
MEMBER
null
As a hotfix for our CI, temporarily pin the `tensorflow` upper version: - In Python 3.10, tensorflow-2.12.0 also installs `jax` Fix #5663 until the root cause is fixed.
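For illustration only, this is roughly what a temporary upper pin looks like in a `setup.py` extras list; the exact requirement strings used in the actual PR may differ:

```python
# setup.py (illustrative excerpt): cap tensorflow below 2.12.0 so the
# Python 3.10 CI stops pulling in jax without jaxlib (see #5663).
TESTS_REQUIRE = [
    "tensorflow>=2.3,<2.12.0",  # temporary pin, to be reverted in #5666
]
```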
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5664/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5664/timeline
null
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5664", "html_url": "https://github.com/huggingface/datasets/pull/5664", "diff_url": "https://github.com/huggingface/datasets/pull/5664.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5664.patch", "merged_at": "2023-03-23T10:09:53" }
true
https://api.github.com/repos/huggingface/datasets/issues/5663
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5663/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5663/comments
https://api.github.com/repos/huggingface/datasets/issues/5663/events
https://github.com/huggingface/datasets/issues/5663
1,637,173,248
I_kwDODunzps5hlUgA
5,663
CI is broken: ModuleNotFoundError: jax requires jaxlib to be installed
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
[]
1,679,564,383,000
1,679,566,195,000
1,679,566,195,000
MEMBER
null
CI test_py310 is broken: see https://github.com/huggingface/datasets/actions/runs/4498945505/jobs/7916194236?pr=5662 ``` FAILED tests/test_arrow_dataset.py::BaseDatasetTest::test_map_jax_in_memory - ModuleNotFoundError: jax requires jaxlib to be installed. See https://github.com/google/jax#installation for installation instructions. FAILED tests/test_arrow_dataset.py::BaseDatasetTest::test_map_jax_on_disk - ModuleNotFoundError: jax requires jaxlib to be installed. See https://github.com/google/jax#installation for installation instructions. FAILED tests/test_formatting.py::FormatterTest::test_jax_formatter - ModuleNotFoundError: jax requires jaxlib to be installed. See https://github.com/google/jax#installation for installation instructions. FAILED tests/test_formatting.py::FormatterTest::test_jax_formatter_audio - ModuleNotFoundError: jax requires jaxlib to be installed. See https://github.com/google/jax#installation for installation instructions. FAILED tests/test_formatting.py::FormatterTest::test_jax_formatter_device - ModuleNotFoundError: jax requires jaxlib to be installed. See https://github.com/google/jax#installation for installation instructions. FAILED tests/test_formatting.py::FormatterTest::test_jax_formatter_image - ModuleNotFoundError: jax requires jaxlib to be installed. See https://github.com/google/jax#installation for installation instructions. FAILED tests/test_formatting.py::FormatterTest::test_jax_formatter_jnp_array_kwargs - ModuleNotFoundError: jax requires jaxlib to be installed. See https://github.com/google/jax#installation for installation instructions. FAILED tests/features/test_features.py::CastToPythonObjectsTest::test_cast_to_python_objects_jax - ModuleNotFoundError: jax requires jaxlib to be installed. See https://github.com/google/jax#installation for installation instructions. ===== 8 failed, 2147 passed, 10 skipped, 37 warnings in 228.69s (0:03:48) ====== ```
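A hypothetical guard the test suite could use to detect this situation, since `import jax` raises `ModuleNotFoundError` when `jaxlib` is missing (this helper is an assumption, not the fix that was merged):

```python
# Detect whether jax is actually usable: jax only works when its companion
# binary package jaxlib is installed too.
import importlib.util

def jax_is_usable() -> bool:
    return (
        importlib.util.find_spec("jax") is not None
        and importlib.util.find_spec("jaxlib") is not None
    )

print(jax_is_usable())
```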
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5663/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5663/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5662
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5662/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5662/comments
https://api.github.com/repos/huggingface/datasets/issues/5662/events
https://github.com/huggingface/datasets/pull/5662
1,637,140,813
PR_kwDODunzps5MtvsM
5,662
Fix unnecessary dict comprehension
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "I am merging because the CI error is unrelated.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009448 / 0.011353 (-0.001905) | 0.006156 / 0.011008 (-0.004852) | 0.123656 / 0.038508 (0.085147) | 0.034998 / 0.023109 (0.011889) | 0.374722 / 0.275898 (0.098824) | 0.418912 / 0.323480 (0.095432) | 0.007348 / 0.007986 (-0.000637) | 0.004779 / 0.004328 (0.000450) | 0.097541 / 0.004250 (0.093291) | 0.052523 / 0.037052 (0.015471) | 0.380118 / 0.258489 (0.121628) | 0.429448 / 0.293841 (0.135607) | 0.055156 / 0.128546 (-0.073390) | 0.019884 / 0.075646 (-0.055763) | 0.429613 / 0.419271 (0.010341) | 0.067554 / 0.043533 (0.024021) | 0.373940 / 0.255139 (0.118801) | 0.408115 / 0.283200 (0.124916) | 0.111353 / 0.141683 (-0.030329) | 1.821013 / 1.452155 (0.368858) | 1.972882 / 1.492716 (0.480165) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.236686 / 0.018006 (0.218679) | 0.516519 / 0.000490 (0.516029) | 0.009582 / 0.000200 (0.009383) | 0.000404 / 0.000054 (0.000349) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029425 / 0.037411 (-0.007986) | 0.123972 / 0.014526 (0.109446) | 0.133768 / 0.176557 (-0.042789) | 0.207562 / 0.737135 (-0.529573) | 0.142841 / 0.296338 (-0.153497) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.618531 / 0.215209 
(0.403322) | 6.216854 / 2.077655 (4.139199) | 2.480138 / 1.504120 (0.976018) | 2.139884 / 1.541195 (0.598689) | 2.122992 / 1.468490 (0.654502) | 1.233824 / 4.584777 (-3.350953) | 5.426142 / 3.745712 (1.680430) | 4.891039 / 5.269862 (-0.378822) | 2.767033 / 4.565676 (-1.798643) | 0.142224 / 0.424275 (-0.282051) | 0.015754 / 0.007607 (0.008147) | 0.772210 / 0.226044 (0.546166) | 7.620484 / 2.268929 (5.351556) | 3.141617 / 55.444624 (-52.303007) | 2.471406 / 6.876477 (-4.405070) | 2.648008 / 2.142072 (0.505935) | 1.429281 / 4.805227 (-3.375946) | 0.255981 / 6.500664 (-6.244683) | 0.077710 / 0.075469 (0.002241) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.547714 / 1.841788 (-0.294073) | 17.859985 / 8.074308 (9.785677) | 21.791878 / 10.191392 (11.600486) | 0.238569 / 0.680424 (-0.441854) | 0.027520 / 0.534201 (-0.506681) | 0.553960 / 0.579283 (-0.025324) | 0.616165 / 0.434364 (0.181801) | 0.622492 / 0.540337 (0.082154) | 0.716345 / 1.386936 (-0.670591) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009624 / 0.011353 (-0.001729) | 0.006091 / 0.011008 (-0.004917) | 0.096623 / 0.038508 (0.058115) | 0.034903 / 0.023109 (0.011793) | 0.421009 / 0.275898 (0.145111) | 0.459236 / 0.323480 (0.135756) | 0.007778 / 0.007986 (-0.000207) | 0.004726 / 0.004328 (0.000398) | 0.099603 / 0.004250 (0.095353) | 0.051426 / 0.037052 (0.014373) | 0.420461 / 0.258489 (0.161972) | 0.469747 / 0.293841 (0.175906) | 0.053769 / 0.128546 (-0.074777) | 0.020636 / 0.075646 (-0.055011) | 0.115785 / 0.419271 (-0.303486) | 0.062692 / 0.043533 (0.019160) | 0.419388 / 0.255139 (0.164249) | 0.448675 / 0.283200 (0.165475) | 0.112099 / 0.141683 (-0.029584) | 1.787982 / 1.452155 (0.335827) | 1.884581 / 1.492716 (0.391864) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.208837 / 0.018006 (0.190831) | 0.515593 / 0.000490 (0.515103) | 0.000447 / 0.000200 (0.000247) | 0.000086 / 0.000054 
(0.000032) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031025 / 0.037411 (-0.006386) | 0.125179 / 0.014526 (0.110653) | 0.137050 / 0.176557 (-0.039506) | 0.203582 / 0.737135 (-0.533553) | 0.139209 / 0.296338 (-0.157130) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.601507 / 0.215209 (0.386298) | 6.034778 / 2.077655 (3.957123) | 2.550277 / 1.504120 (1.046157) | 2.242277 / 1.541195 (0.701082) | 2.306378 / 1.468490 (0.837888) | 1.251219 / 4.584777 (-3.333558) | 5.448698 / 3.745712 (1.702986) | 3.044666 / 5.269862 (-2.225196) | 2.000684 / 4.565676 (-2.564992) | 0.148385 / 0.424275 (-0.275890) | 0.015175 / 0.007607 (0.007567) | 0.800839 / 0.226044 (0.574795) | 8.062099 / 2.268929 (5.793171) | 3.400980 / 55.444624 (-52.043644) | 2.639583 / 6.876477 (-4.236894) | 2.660691 / 2.142072 (0.518618) | 1.467715 / 4.805227 (-3.337512) | 0.266429 / 6.500664 (-6.234235) | 0.076981 / 0.075469 (0.001512) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.621128 / 1.841788 (-0.220659) | 17.949989 / 8.074308 (9.875680) | 20.946426 / 10.191392 (10.755034) | 0.259357 / 0.680424 (-0.421067) | 0.026094 / 0.534201 (-0.508107) | 0.527840 / 0.579283 (-0.051443) | 0.629027 / 0.434364 (0.194663) | 0.603931 / 0.540337 (0.063594) | 0.711370 / 1.386936 (-0.675566) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#2ccf01db81bb7b70f3ea97b185e345c2b1df0274 \"CML watermark\")\n" ]
1,679,563,138,000
1,679,564,819,000
1,679,564,269,000
MEMBER
null
After the ruff-0.0.258 release, the C416 rule was extended to also flag unnecessary dict comprehensions. See: - https://github.com/charliermarsh/ruff/releases/tag/v0.0.258 - https://github.com/charliermarsh/ruff/pull/3605 This PR fixes the one unnecessary dict comprehension in our code: there is no need to unpack and re-pack the tuple values. Fix #5661
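An illustrative example of the pattern C416 now flags; this is not the exact `datasets` code, just the shape of the fix:

```python
pairs = {"a": 1, "b": 2}

# Flagged: unpacking each (key, value) tuple only to re-pack it adds nothing.
copy_flagged = {k: v for k, v in pairs.items()}

# Preferred rewrite: dict() consumes the (key, value) tuples directly.
copy_fixed = dict(pairs.items())

assert copy_flagged == copy_fixed
```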
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5662/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5662/timeline
null
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5662", "html_url": "https://github.com/huggingface/datasets/pull/5662", "diff_url": "https://github.com/huggingface/datasets/pull/5662.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5662.patch", "merged_at": "2023-03-23T09:37:49" }
true
https://api.github.com/repos/huggingface/datasets/issues/5661
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5661/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5661/comments
https://api.github.com/repos/huggingface/datasets/issues/5661/events
https://github.com/huggingface/datasets/issues/5661
1,637,129,445
I_kwDODunzps5hlJzl
5,661
CI is broken: Unnecessary `dict` comprehension
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
[]
1,679,562,781,000
1,679,564,271,000
1,679,564,271,000
MEMBER
null
CI check_code_quality is broken: ``` src/datasets/arrow_dataset.py:3267:35: C416 [*] Unnecessary `dict` comprehension (rewrite using `dict()`) Found 1 error. ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5661/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5661/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5660
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5660/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5660/comments
https://api.github.com/repos/huggingface/datasets/issues/5660/events
https://github.com/huggingface/datasets/issues/5660
1,635,543,646
I_kwDODunzps5hfGpe
5,660
integration with imbalanced-learn
{ "login": "tansaku", "id": 30216, "node_id": "MDQ6VXNlcjMwMjE2", "avatar_url": "https://avatars.githubusercontent.com/u/30216?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tansaku", "html_url": "https://github.com/tansaku", "followers_url": "https://api.github.com/users/tansaku/followers", "following_url": "https://api.github.com/users/tansaku/following{/other_user}", "gists_url": "https://api.github.com/users/tansaku/gists{/gist_id}", "starred_url": "https://api.github.com/users/tansaku/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tansaku/subscriptions", "organizations_url": "https://api.github.com/users/tansaku/orgs", "repos_url": "https://api.github.com/users/tansaku/repos", "events_url": "https://api.github.com/users/tansaku/events{/privacy}", "received_events_url": "https://api.github.com/users/tansaku/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
[ "You can convert any dataset to pandas to be used with imbalanced-learn using `.to_pandas()`\r\n\r\nOtherwise if you want to keep a `Dataset` object and still use e.g. [make_imbalance](https://imbalanced-learn.org/stable/references/generated/imblearn.datasets.make_imbalance.html#imblearn.datasets.make_imbalance), you just need to pass the list of rows ids and labels:\r\n\r\n```python\r\nrow_indices = list(range(len(dataset)))\r\nresampled_row_indices, _ = make_imbalance(\r\n row_indices,\r\n dataset[\"label\"],\r\n sampling_strategy={0: 25, 1: 50, 2: 50},\r\n random_state=RANDOM_STATE,\r\n)\r\n\r\nresampled_dataset = dataset.select(resampled_row_indices)\r\n```" ]
1,679,483,117,000
1,679,590,839,000
null
NONE
null
### Feature request Wouldn't it be great if the various class-balancing operations from imbalanced-learn were available as part of `datasets`? ### Motivation I'm trying to use imbalanced-learn to balance a dataset, but it's not clear how to get the two libraries to interoperate; some examples would be great. I've looked online and asked GPT-4, but so far I'm not making much progress. ### Your contribution If I can get this working myself, I can submit a PR with example code to go in the docs.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5660/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5660/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5659
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5659/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5659/comments
https://api.github.com/repos/huggingface/datasets/issues/5659/events
https://github.com/huggingface/datasets/issues/5659
1,635,447,540
I_kwDODunzps5hevL0
5,659
[Audio] Soundfile/libsndfile requirements too stringent for decoding mp3 files
{ "login": "sanchit-gandhi", "id": 93869735, "node_id": "U_kgDOBZhWpw", "avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sanchit-gandhi", "html_url": "https://github.com/sanchit-gandhi", "followers_url": "https://api.github.com/users/sanchit-gandhi/followers", "following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}", "gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}", "starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions", "organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs", "repos_url": "https://api.github.com/users/sanchit-gandhi/repos", "events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}", "received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @polinaeterna @lhoestq ", "@sanchit-gandhi can you please also post the logs of `pip install soundfile==0.12.1`? To check what wheel is being installed or if it's being built from source (I think it's the latter case). \r\nRequired `libsndfile` binary **should** be bundeled with `soundfile` wheel but I assume it **might not** be the case for some non standard Linux distributions. \r\nThe only solution for using `soundfile` here is to build [`libsndfile`](https://github.com/libsndfile/libsndfile) from source:\r\n\r\n```bash\r\ngit clone https://github.com/libsndfile/libsndfile.git\r\ncd libsndfile/\r\nautoreconf -vif\r\n./configure --enable-werror \r\nmake\r\nmake install\r\n```\r\nfor this, some building libraries should be installed, for Debian/Ubuntu it's like:\r\n```bash\r\napt install autoconf autogen automake build-essential libasound2-dev \\\r\n libflac-dev libogg-dev libtool libvorbis-dev libopus-dev libmp3lame-dev \\\r\n libmpg123-dev pkg-config python\r\n```\r\nbut for other Linux distributions it might be different.\r\n\r\nWhen the binary is compiled, it should be put into location where `soundfile` would search for it (the directory is named `_soundfile_data`), it depends on where`libsdfile` (from the previous step) and `soundfile` were installed, might be something like this:\r\n\r\n```bash\r\ncp /usr/local/lib/libsndfile.so /usr/local/lib/python3.7/dist-packages/_soundfile_data/\r\ncp /usr/local/lib/libsndfile.la /usr/local/lib/python3.7/dist-packages/_soundfile_data/\r\n```\r\n\r\nAnother solution is to not use `soundfile` and apply custom processing function with `torchaudio` while setting `decode=False` in `Audio` feature and passing custom function to `.map`. ", "Not sure if it may help, but you could also try updating `pip` before installing soundfile", "@lhoestq @sanchit-gandhi. I encountered the same error (also on the TPU v4) when trying to run `datasets` from source.\r\n\r\nDowngrading soundfile with `pip install soundfile==0.12.0` seems to fix the issue for me.", "Maybe let's open an issue at https://github.com/bastibe/python-soundfile/issues in case they might know why you get `OSError: cannot load library 'libsndfile.so'` ?", "> @sanchit-gandhi can you please also post the logs of `pip install soundfile==0.12.1`? To check what wheel is being installed or if it's being built from source (I think it's the latter case). Required `libsndfile` binary **should** be bundeled with `soundfile` wheel but I assume it **might not** be the case for some non standard Linux distributions. The only solution for using `soundfile` here is to build [`libsndfile`](https://github.com/libsndfile/libsndfile) from source:\r\n> \r\n> ```shell\r\n> git clone https://github.com/libsndfile/libsndfile.git\r\n> cd libsndfile/\r\n> autoreconf -vif\r\n> ./configure --enable-werror \r\n> make\r\n> make install\r\n> ```\r\n\r\nThis fixed the issue for me. After installing libsndfile as described above, I had to uninstall soundfile and re-install it with this command. `pip install \"soundfile>=0.12.1\"`", "Thank you so much for the comprehensive instructions @polinaeterna! Also confirming that they worked for me πŸ€— In my case, I had to run several of these commands under \"sudo\" for privileges, but otherwise this workaround gave a successful `libsndfile` install:\r\n\r\n1. Grab source code:\r\n```\r\ngit clone https://github.com/libsndfile/libsndfile.git\r\n```\r\n\r\n2. 
Set up a build environment:\r\n```\r\nsudo apt install autoconf autogen automake build-essential libasound2-dev \\\r\n libflac-dev libogg-dev libtool libvorbis-dev libopus-dev libmp3lame-dev \\\r\n libmpg123-dev pkg-config python\r\n```\r\n\r\n3. Build and test `libsndfile`:\r\n\r\n```\r\nautoreconf -vif\r\n./configure --enable-werror\r\nsudo make\r\nsudo make check\r\n```\r\n\r\n4. Create `_soundfile_data` submodule (if it does not exist already):\r\n```\r\nsudo mkdir /usr/local/lib/python3.8/dist-packages/_soundfile_data/\r\n```\r\n\r\n5. Copy `libsndfile` files into submodule:\r\n```\r\nsudo cp /usr/local/lib/libsndfile.* /usr/local/lib/python3.8/dist-packages/_soundfile_data/\r\n```", "On a different machine, I also tried separately by first upgrading pip, then installing soundfile. This worked too! Thanks @lhoestq πŸ™Œ", "> @sanchit-gandhi can you please also post the logs of `pip install soundfile==0.12.1`? To check what wheel is being installed or if it's being built from source (I think it's the latter case). Required `libsndfile` binary **should** be bundeled with `soundfile` wheel but I assume it **might not** be the case for some non standard Linux distributions. The only solution for using `soundfile` here is to build [`libsndfile`](https://github.com/libsndfile/libsndfile) from source:\r\n> \r\n> ```shell\r\n> git clone https://github.com/libsndfile/libsndfile.git\r\n> cd libsndfile/\r\n> autoreconf -vif\r\n> ./configure --enable-werror \r\n> make\r\n> make install\r\n> ```\r\n> \r\n> for this, some building libraries should be installed, for Debian/Ubuntu it's like:\r\n> \r\n> ```shell\r\n> apt install autoconf autogen automake build-essential libasound2-dev \\\r\n> libflac-dev libogg-dev libtool libvorbis-dev libopus-dev libmp3lame-dev \\\r\n> libmpg123-dev pkg-config python\r\n> ```\r\n> \r\n> but for other Linux distributions it might be different.\r\n> \r\n> When the binary is compiled, it should be put into location where `soundfile` would search for it (the directory is named `_soundfile_data`), it depends on where`libsdfile` (from the previous step) and `soundfile` were installed, might be something like this:\r\n> \r\n> ```shell\r\n> cp /usr/local/lib/libsndfile.so /usr/local/lib/python3.7/dist-packages/_soundfile_data/\r\n> cp /usr/local/lib/libsndfile.la /usr/local/lib/python3.7/dist-packages/_soundfile_data/\r\n> ```\r\n> \r\n> Another solution is to not use `soundfile` and apply custom processing function with `torchaudio` while setting `decode=False` in `Audio` feature and passing custom function to `.map`.\r\n\r\nThanks, the solution solved my problem. \r\n\r\n1. Purge uninstall libsndfile, uninstall python-soundfile.\r\n2. Build libsndfile from source code and install.\r\n3. Build python-soundfile from source code and install\r\n4. Well done." ]
1,679,479,653,000
1,682,652,339,000
1,680,857,488,000
CONTRIBUTOR
null
### Describe the bug I'm encountering several issues trying to load mp3 audio files using `datasets` on a TPU v4. The PR https://github.com/huggingface/datasets/pull/5573 updated the audio loading logic to rely solely on the `soundfile`/`libsndfile` libraries for loading audio samples, regardless of their file type. The installation guide suggests that `libsndfile` is bundled in when `soundfile` is pip installed: https://github.com/huggingface/datasets/blob/e1af108015e43f9df8734a1faeeaeb9eafce3971/docs/source/installation.md?plain=1#L70-L71 However, just pip installing `soundfile==0.12.1` throws an error that `libsndfile` is missing: ``` pip install soundfile==0.12.1 ``` Then: ```python >>> soundfile >>> soundfile.__libsndfile_version__ ``` <details> <summary> Traceback (most recent call last): </summary> ``` File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/soundfile.py", line 161, in <module> import _soundfile_data # ImportError if this doesn't exist ModuleNotFoundError: No module named '_soundfile_data' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/soundfile.py", line 170, in <module> raise OSError('sndfile library not found using ctypes.util.find_library') OSError: sndfile library not found using ctypes.util.find_library During handling of the above exception, another exception occurred: Traceback (most recent call last): File "<string>", line 1, in <module> File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/soundfile.py", line 192, in <module> _snd = _ffi.dlopen(_explicit_libname) OSError: cannot load library 'libsndfile.so': libsndfile.so: cannot open shared object file: No such file or directory ``` </details> Thus, I've followed the official instructions for installing the `soundfile` package from https://github.com/bastibe/python-soundfile#installation, which states that `libsndfile` needs to be installed separately as: ``` pip install --upgrade soundfile sudo apt install libsndfile1 ``` We can now import `soundfile`: ```python >>> import soundfile >>> soundfile.__version__ '0.12.1' >>> soundfile.__libsndfile_version__ '1.0.28' ``` We see that we have `soundfile==0.12.1`, which matches the `datasets[audio]` package constraints: https://github.com/huggingface/datasets/blob/e1af108015e43f9df8734a1faeeaeb9eafce3971/setup.py#L144-L147 But we have `libsndfile==1.0.28`, which is too low for decoding mp3 files: https://github.com/huggingface/datasets/blob/e1af108015e43f9df8734a1faeeaeb9eafce3971/src/datasets/config.py#L136-L138 Updating/upgrading the `libsndfile` doesn't change this: ``` sudo apt-get update sudo apt-get upgrade ``` Is there any other suggestion for how to get a compatible `libsndfile` version? Currently, the version bundled with Ubuntu `apt-get` is too low for decoding mp3 files. Maybe we could add this under `setup.py` such that we install the correct `libsndfile` version when we do `pip install datasets[audio]`? IMO this would help circumvent such version issues. ### Steps to reproduce the bug Environment described above. 
Loading mp3 files: ```python from datasets import load_dataset common_voice_es = load_dataset("common_voice", "es", split="validation", streaming=True) print(next(iter(common_voice_es))) ``` ```python --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) Cell In[4], line 2 1 common_voice_es = load_dataset("common_voice", "es", split="validation", streaming=True) ----> 2 print(next(iter(common_voice_es))) File ~/datasets/src/datasets/iterable_dataset.py:941, in IterableDataset.__iter__(self) 937 for key, example in ex_iterable: 938 if self.features: 939 # `IterableDataset` automatically fills missing columns with None. 940 # This is done with `_apply_feature_types_on_example`. --> 941 yield _apply_feature_types_on_example( 942 example, self.features, token_per_repo_id=self._token_per_repo_id 943 ) 944 else: 945 yield example File ~/datasets/src/datasets/iterable_dataset.py:700, in _apply_feature_types_on_example(example, features, token_per_repo_id) 698 encoded_example = features.encode_example(example) 699 # Decode example for Audio feature, e.g. --> 700 decoded_example = features.decode_example(encoded_example, token_per_repo_id=token_per_repo_id) 701 return decoded_example File ~/datasets/src/datasets/features/features.py:1864, in Features.decode_example(self, example, token_per_repo_id) 1850 def decode_example(self, example: dict, token_per_repo_id: Optional[Dict[str, Union[str, bool, None]]] = None): 1851 """Decode example with custom feature decoding. 1852 1853 Args: (...) 1861 `dict[str, Any]` 1862 """ -> 1864 return { 1865 column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id) 1866 if self._column_requires_decoding[column_name] 1867 else value 1868 for column_name, (feature, value) in zip_dict( 1869 {key: value for key, value in self.items() if key in example}, example 1870 ) 1871 } File ~/datasets/src/datasets/features/features.py:1865, in <dictcomp>(.0) 1850 def decode_example(self, example: dict, token_per_repo_id: Optional[Dict[str, Union[str, bool, None]]] = None): 1851 """Decode example with custom feature decoding. 1852 1853 Args: (...) 1861 `dict[str, Any]` 1862 """ 1864 return { -> 1865 column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id) 1866 if self._column_requires_decoding[column_name] 1867 else value 1868 for column_name, (feature, value) in zip_dict( 1869 {key: value for key, value in self.items() if key in example}, example 1870 ) 1871 } File ~/datasets/src/datasets/features/features.py:1308, in decode_nested_example(schema, obj, token_per_repo_id) 1305 elif isinstance(schema, (Audio, Image)): 1306 # we pass the token to read and decode files from private repositories in streaming mode 1307 if obj is not None and schema.decode: -> 1308 return schema.decode_example(obj, token_per_repo_id=token_per_repo_id) 1309 return obj File ~/datasets/src/datasets/features/audio.py:167, in Audio.decode_example(self, value, token_per_repo_id) 162 raise RuntimeError( 163 "Decoding 'opus' files requires system library 'libsndfile'>=1.0.31, " 164 'You can try to update `soundfile` python library: `pip install "soundfile>=0.12.1"`. ' 165 ) 166 elif not config.IS_MP3_SUPPORTED and audio_format == "mp3": --> 167 raise RuntimeError( 168 "Decoding 'mp3' files requires system library 'libsndfile'>=1.1.0, " 169 'You can try to update `soundfile` python library: `pip install "soundfile>=0.12.1"`. 
' 170 ) 172 if file is None: 173 token_per_repo_id = token_per_repo_id or {} RuntimeError: Decoding 'mp3' files requires system library 'libsndfile'>=1.1.0, You can try to update `soundfile` python library: `pip install "soundfile>=0.12.1"`. ``` ### Expected behavior Load mp3 files! ### Environment info - `datasets` version: 2.10.2.dev0 - Platform: Linux-5.13.0-1023-gcp-x86_64-with-glibc2.29 - Python version: 3.8.10 - Huggingface_hub version: 0.13.1 - PyArrow version: 11.0.0 - Pandas version: 1.5.3 - Soundfile version: 0.12.1 - Libsndfile version: 1.0.28
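A small sanity check, assuming `soundfile` imports at all: per the error message above, mp3 decoding needs the bundled `libsndfile` to be at least 1.1.0, and this sketch simply reports whether the installed copy clears that bar:

```python
import soundfile as sf

# __libsndfile_version__ is a string like "1.0.28"; compare the first two parts.
major, minor = (int(p) for p in sf.__libsndfile_version__.split(".")[:2])
if (major, minor) < (1, 1):
    print(f"libsndfile {sf.__libsndfile_version__} is too old for mp3 decoding")
else:
    print(f"libsndfile {sf.__libsndfile_version__} supports mp3 decoding")
```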
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5659/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5659/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5658
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5658/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5658/comments
https://api.github.com/repos/huggingface/datasets/issues/5658/events
https://github.com/huggingface/datasets/pull/5658
1,634,867,204
PR_kwDODunzps5MmJe0
5,658
docs: Update num_shards docs to mention num_proc on Dataset and DatasetDict
{ "login": "connor-henderson", "id": 78612354, "node_id": "MDQ6VXNlcjc4NjEyMzU0", "avatar_url": "https://avatars.githubusercontent.com/u/78612354?v=4", "gravatar_id": "", "url": "https://api.github.com/users/connor-henderson", "html_url": "https://github.com/connor-henderson", "followers_url": "https://api.github.com/users/connor-henderson/followers", "following_url": "https://api.github.com/users/connor-henderson/following{/other_user}", "gists_url": "https://api.github.com/users/connor-henderson/gists{/gist_id}", "starred_url": "https://api.github.com/users/connor-henderson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/connor-henderson/subscriptions", "organizations_url": "https://api.github.com/users/connor-henderson/orgs", "repos_url": "https://api.github.com/users/connor-henderson/repos", "events_url": "https://api.github.com/users/connor-henderson/events{/privacy}", "received_events_url": "https://api.github.com/users/connor-henderson/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007351 / 0.011353 (-0.004002) | 0.005025 / 0.011008 (-0.005983) | 0.095978 / 0.038508 (0.057470) | 0.033486 / 0.023109 (0.010377) | 0.294427 / 0.275898 (0.018529) | 0.325157 / 0.323480 (0.001677) | 0.005671 / 0.007986 (-0.002315) | 0.005284 / 0.004328 (0.000955) | 0.073159 / 0.004250 (0.068909) | 0.045162 / 0.037052 (0.008110) | 0.294004 / 0.258489 (0.035515) | 0.343545 / 0.293841 (0.049704) | 0.036857 / 0.128546 (-0.091689) | 0.012245 / 0.075646 (-0.063401) | 0.332258 / 0.419271 (-0.087014) | 0.051909 / 0.043533 (0.008377) | 0.295701 / 0.255139 (0.040562) | 0.315247 / 0.283200 (0.032048) | 0.102363 / 0.141683 (-0.039320) | 1.441944 / 1.452155 (-0.010211) | 1.527161 / 1.492716 (0.034445) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.211769 / 0.018006 (0.193763) | 0.452015 / 0.000490 (0.451525) | 0.004041 / 0.000200 (0.003841) | 0.000078 / 0.000054 (0.000024) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027396 / 0.037411 (-0.010015) | 0.108318 / 0.014526 (0.093793) | 0.116851 / 0.176557 (-0.059706) | 0.172658 / 0.737135 (-0.564478) | 0.122876 / 0.296338 (-0.173462) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.406484 / 0.215209 (0.191275) | 4.053849 / 2.077655 (1.976194) | 
1.842947 / 1.504120 (0.338827) | 1.649473 / 1.541195 (0.108278) | 1.728629 / 1.468490 (0.260139) | 0.699519 / 4.584777 (-3.885258) | 3.730823 / 3.745712 (-0.014889) | 2.139624 / 5.269862 (-3.130237) | 1.487839 / 4.565676 (-3.077837) | 0.086699 / 0.424275 (-0.337576) | 0.012815 / 0.007607 (0.005208) | 0.514014 / 0.226044 (0.287969) | 5.153315 / 2.268929 (2.884387) | 2.324431 / 55.444624 (-53.120193) | 1.971533 / 6.876477 (-4.904944) | 2.074480 / 2.142072 (-0.067592) | 0.842419 / 4.805227 (-3.962808) | 0.169140 / 6.500664 (-6.331524) | 0.065206 / 0.075469 (-0.010263) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.180887 / 1.841788 (-0.660901) | 14.627401 / 8.074308 (6.553093) | 14.382699 / 10.191392 (4.191307) | 0.143986 / 0.680424 (-0.536438) | 0.017460 / 0.534201 (-0.516741) | 0.422100 / 0.579283 (-0.157183) | 0.417474 / 0.434364 (-0.016890) | 0.493712 / 0.540337 (-0.046625) | 0.589744 / 1.386936 (-0.797193) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007538 / 0.011353 (-0.003815) | 0.005122 / 0.011008 (-0.005887) | 0.073858 / 0.038508 (0.035350) | 0.034561 / 0.023109 (0.011451) | 0.341250 / 0.275898 (0.065352) | 0.373063 / 0.323480 (0.049583) | 0.005785 / 0.007986 (-0.002200) | 0.005393 / 0.004328 (0.001065) | 0.072354 / 0.004250 (0.068104) | 0.047005 / 0.037052 (0.009953) | 0.341179 / 0.258489 (0.082690) | 0.386299 / 0.293841 (0.092458) | 0.038315 / 0.128546 (-0.090231) | 0.012200 / 0.075646 (-0.063446) | 0.086132 / 0.419271 (-0.333140) | 0.049873 / 0.043533 (0.006340) | 0.337985 / 0.255139 (0.082846) | 0.354806 / 0.283200 (0.071607) | 0.103557 / 0.141683 (-0.038126) | 1.445682 / 1.452155 (-0.006473) | 1.551008 / 1.492716 (0.058291) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.235873 / 0.018006 (0.217867) | 0.448445 / 0.000490 (0.447955) | 0.001307 / 0.000200 (0.001108) | 0.000087 / 0.000054 (0.000032) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029809 / 0.037411 (-0.007603) | 0.108833 / 0.014526 (0.094307) | 0.123289 / 0.176557 (-0.053268) | 0.176516 / 0.737135 (-0.560620) | 0.127186 / 0.296338 (-0.169153) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.422037 / 0.215209 (0.206828) | 4.188073 / 2.077655 (2.110418) | 1.999295 / 1.504120 (0.495175) | 1.809229 / 1.541195 (0.268034) | 1.930798 / 1.468490 (0.462308) | 0.694371 / 4.584777 (-3.890406) | 3.833432 / 3.745712 (0.087719) | 3.235600 / 5.269862 (-2.034262) | 1.867822 / 4.565676 (-2.697854) | 0.085734 / 0.424275 (-0.338541) | 0.012727 / 0.007607 (0.005120) | 0.542261 / 0.226044 (0.316217) | 5.289366 / 2.268929 (3.020437) | 2.469636 / 55.444624 (-52.974988) | 2.139392 / 6.876477 (-4.737084) | 2.193305 / 2.142072 (0.051233) | 0.846747 / 4.805227 (-3.958481) | 0.168965 / 6.500664 (-6.331699) | 0.064463 / 0.075469 (-0.011006) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.263818 / 1.841788 (-0.577970) | 15.254642 / 8.074308 (7.180334) | 14.428111 / 10.191392 (4.236719) | 0.164770 / 0.680424 (-0.515654) | 0.017476 / 0.534201 (-0.516725) | 0.420198 / 0.579283 (-0.159085) | 0.443250 / 0.434364 (0.008886) | 0.496904 / 0.540337 (-0.043434) | 0.596541 / 1.386936 (-0.790395) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#4db8e33eb9cf6cd4453cdfa246c065e0eedf170c \"CML watermark\")\n" ]
1,679,443,938,000
1,679,676,214,000
1,679,675,781,000
CONTRIBUTOR
null
Closes #5653 @mariosasko
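An illustrative sketch of the interaction the updated docs describe; the dataset name and output path are placeholders:

```python
from datasets import load_dataset

ds = load_dataset("imdb", split="train")
# num_shards fixes how many files are written; num_proc parallelizes the
# writing, and the docs updated here note that when num_shards is unset,
# the default shard count also takes num_proc into account.
ds.save_to_disk("imdb_train", num_shards=8, num_proc=4)
```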
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5658/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5658/timeline
null
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5658", "html_url": "https://github.com/huggingface/datasets/pull/5658", "diff_url": "https://github.com/huggingface/datasets/pull/5658.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5658.patch", "merged_at": "2023-03-24T16:36:21" }
true
https://api.github.com/repos/huggingface/datasets/issues/5656
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5656/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5656/comments
https://api.github.com/repos/huggingface/datasets/issues/5656/events
https://github.com/huggingface/datasets/pull/5656
1,634,156,563
PR_kwDODunzps5Mjxoo
5,656
Fix `fsspec.open` when using an HTTP proxy
{ "login": "bryant1410", "id": 3905501, "node_id": "MDQ6VXNlcjM5MDU1MDE=", "avatar_url": "https://avatars.githubusercontent.com/u/3905501?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bryant1410", "html_url": "https://github.com/bryant1410", "followers_url": "https://api.github.com/users/bryant1410/followers", "following_url": "https://api.github.com/users/bryant1410/following{/other_user}", "gists_url": "https://api.github.com/users/bryant1410/gists{/gist_id}", "starred_url": "https://api.github.com/users/bryant1410/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bryant1410/subscriptions", "organizations_url": "https://api.github.com/users/bryant1410/orgs", "repos_url": "https://api.github.com/users/bryant1410/repos", "events_url": "https://api.github.com/users/bryant1410/events{/privacy}", "received_events_url": "https://api.github.com/users/bryant1410/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007980 / 0.011353 (-0.003373) | 0.005351 / 0.011008 (-0.005657) | 0.096325 / 0.038508 (0.057817) | 0.034204 / 0.023109 (0.011095) | 0.328080 / 0.275898 (0.052182) | 0.361519 / 0.323480 (0.038039) | 0.005954 / 0.007986 (-0.002032) | 0.004106 / 0.004328 (-0.000222) | 0.072827 / 0.004250 (0.068576) | 0.050522 / 0.037052 (0.013470) | 0.326975 / 0.258489 (0.068486) | 0.373180 / 0.293841 (0.079339) | 0.037024 / 0.128546 (-0.091522) | 0.012347 / 0.075646 (-0.063299) | 0.332341 / 0.419271 (-0.086931) | 0.050695 / 0.043533 (0.007162) | 0.328298 / 0.255139 (0.073159) | 0.352808 / 0.283200 (0.069608) | 0.101637 / 0.141683 (-0.040046) | 1.435172 / 1.452155 (-0.016982) | 1.529797 / 1.492716 (0.037080) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.305727 / 0.018006 (0.287721) | 0.583951 / 0.000490 (0.583462) | 0.011699 / 0.000200 (0.011499) | 0.000345 / 0.000054 (0.000290) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027917 / 0.037411 (-0.009495) | 0.107698 / 0.014526 (0.093173) | 0.120572 / 0.176557 (-0.055985) | 0.176066 / 0.737135 (-0.561069) | 0.125348 / 0.296338 (-0.170991) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.411980 / 0.215209 (0.196771) | 4.113135 / 2.077655 (2.035480) | 
1.868725 / 1.504120 (0.364605) | 1.677422 / 1.541195 (0.136227) | 1.796759 / 1.468490 (0.328269) | 0.701957 / 4.584777 (-3.882820) | 3.830742 / 3.745712 (0.085030) | 2.170444 / 5.269862 (-3.099418) | 1.345097 / 4.565676 (-3.220580) | 0.086661 / 0.424275 (-0.337614) | 0.013073 / 0.007607 (0.005466) | 0.519150 / 0.226044 (0.293106) | 5.193447 / 2.268929 (2.924518) | 2.391155 / 55.444624 (-53.053470) | 2.076610 / 6.876477 (-4.799867) | 2.245557 / 2.142072 (0.103484) | 0.846496 / 4.805227 (-3.958731) | 0.169246 / 6.500664 (-6.331418) | 0.066360 / 0.075469 (-0.009109) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.196344 / 1.841788 (-0.645444) | 15.640363 / 8.074308 (7.566055) | 14.936144 / 10.191392 (4.744752) | 0.163613 / 0.680424 (-0.516811) | 0.017900 / 0.534201 (-0.516301) | 0.425377 / 0.579283 (-0.153906) | 0.431119 / 0.434364 (-0.003245) | 0.513669 / 0.540337 (-0.026669) | 0.592970 / 1.386936 (-0.793966) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007958 / 0.011353 (-0.003395) | 0.005707 / 0.011008 (-0.005301) | 0.075377 / 0.038508 (0.036869) | 0.037126 / 0.023109 (0.014016) | 0.344589 / 0.275898 (0.068691) | 0.381060 / 0.323480 (0.057580) | 0.006592 / 0.007986 (-0.001393) | 0.004479 / 0.004328 (0.000151) | 0.074456 / 0.004250 (0.070206) | 0.054087 / 0.037052 (0.017035) | 0.344942 / 0.258489 (0.086453) | 0.393174 / 0.293841 (0.099333) | 0.037926 / 0.128546 (-0.090620) | 0.012638 / 0.075646 (-0.063009) | 0.087743 / 0.419271 (-0.331529) | 0.050081 / 0.043533 (0.006548) | 0.340406 / 0.255139 (0.085267) | 0.361487 / 0.283200 (0.078287) | 0.108546 / 0.141683 (-0.033137) | 1.424626 / 1.452155 (-0.027529) | 1.553958 / 1.492716 (0.061242) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.329922 / 0.018006 (0.311916) | 0.523239 / 0.000490 (0.522749) | 0.012164 / 0.000200 (0.011964) | 0.000137 / 0.000054 (0.000082) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031935 / 0.037411 (-0.005477) | 0.115680 / 0.014526 (0.101154) | 0.130062 / 0.176557 (-0.046494) | 0.180679 / 0.737135 (-0.556457) | 0.135548 / 0.296338 (-0.160790) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.429648 / 0.215209 (0.214439) | 4.303342 / 2.077655 (2.225687) | 1.999395 / 1.504120 (0.495275) | 1.810354 / 1.541195 (0.269160) | 1.963132 / 1.468490 (0.494642) | 0.701654 / 4.584777 (-3.883122) | 3.844687 / 3.745712 (0.098975) | 2.153425 / 5.269862 (-3.116436) | 1.351541 / 4.565676 (-3.214135) | 0.086292 / 0.424275 (-0.337983) | 0.012491 / 0.007607 (0.004883) | 0.523144 / 0.226044 (0.297099) | 5.243283 / 2.268929 (2.974355) | 2.465849 / 55.444624 (-52.978775) | 2.154505 / 6.876477 (-4.721972) | 2.245500 / 2.142072 (0.103428) | 0.838902 / 4.805227 (-3.966326) | 0.169441 / 6.500664 (-6.331223) | 0.065631 / 0.075469 (-0.009838) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.262175 / 1.841788 (-0.579612) | 15.424650 / 8.074308 (7.350342) | 15.000718 / 10.191392 (4.809326) | 0.186328 / 0.680424 (-0.494096) | 0.018076 / 0.534201 (-0.516125) | 0.433458 / 0.579283 (-0.145825) | 0.424213 / 0.434364 (-0.010151) | 0.546568 / 0.540337 (0.006231) | 0.643529 / 1.386936 (-0.743407) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ea7298bf121d7ae8079f0a59deb67c2fa1d4df6a \"CML watermark\")\n" ]
1,679,412,209,000
1,679,580,890,000
1,679,577,346,000
CONTRIBUTOR
null
Most HTTP(S) downloads from this library honor proxies automatically by reading the `HTTP_PROXY` environment variable (and its relatives), because `requests` is used throughout. However, some parts of the code use `fsspec`, which relies on `aiohttp` rather than `requests` for HTTP(S) requests, and `aiohttp` does not read the proxy environment variables by default. This PR enables reading them automatically. See the [aiohttp docs on using proxies](https://docs.aiohttp.org/en/stable/client_advanced.html?highlight=trust_env#proxy-support). For context, [the `requests` library](https://requests.readthedocs.io/en/latest/user/advanced/?highlight=http_proxy#proxies) and [the standard library's `urllib.request.urlopen`](https://docs.python.org/3/library/urllib.request.html#urllib.request.urlopen) support this automatically by default, as do most common tools, including cURL, APT, and Wget.
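For illustration, a minimal sketch of the behavior this enables (not the PR's actual diff; the URL is a placeholder): with `trust_env=True`, `aiohttp` honors `HTTP_PROXY`/`HTTPS_PROXY`/`NO_PROXY` just like `requests` does.

```python
import asyncio
import aiohttp

async def check(url: str) -> int:
    # trust_env=True tells aiohttp to read HTTP_PROXY / HTTPS_PROXY / NO_PROXY
    # from the environment, matching the default behavior of `requests` and
    # `urllib.request.urlopen`.
    async with aiohttp.ClientSession(trust_env=True) as session:
        async with session.get(url) as resp:
            return resp.status

print(asyncio.run(check("https://huggingface.co")))
```

Since `fsspec`'s HTTP filesystem forwards `client_kwargs` to `aiohttp.ClientSession`, passing `{"trust_env": True}` there is one way to get the same behavior through `fsspec`.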
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5656/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5656/timeline
null
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5656", "html_url": "https://github.com/huggingface/datasets/pull/5656", "diff_url": "https://github.com/huggingface/datasets/pull/5656.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5656.patch", "merged_at": "2023-03-23T13:15:46" }
true
https://api.github.com/repos/huggingface/datasets/issues/5655
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5655/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5655/comments
https://api.github.com/repos/huggingface/datasets/issues/5655/events
https://github.com/huggingface/datasets/pull/5655
1,634,030,017
PR_kwDODunzps5MjWYy
5,655
Improve features decoding in to_iterable_dataset
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009691 / 0.011353 (-0.001662) | 0.006160 / 0.011008 (-0.004848) | 0.127528 / 0.038508 (0.089020) | 0.034445 / 0.023109 (0.011335) | 0.391483 / 0.275898 (0.115585) | 0.425922 / 0.323480 (0.102442) | 0.006621 / 0.007986 (-0.001365) | 0.004550 / 0.004328 (0.000221) | 0.099134 / 0.004250 (0.094884) | 0.051089 / 0.037052 (0.014037) | 0.398675 / 0.258489 (0.140186) | 0.456740 / 0.293841 (0.162899) | 0.052279 / 0.128546 (-0.076267) | 0.020878 / 0.075646 (-0.054768) | 0.414954 / 0.419271 (-0.004317) | 0.061903 / 0.043533 (0.018370) | 0.393088 / 0.255139 (0.137949) | 0.410289 / 0.283200 (0.127089) | 0.101684 / 0.141683 (-0.039998) | 1.747102 / 1.452155 (0.294947) | 1.896976 / 1.492716 (0.404260) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.203193 / 0.018006 (0.185187) | 0.495011 / 0.000490 (0.494521) | 0.006290 / 0.000200 (0.006090) | 0.000098 / 0.000054 (0.000043) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034840 / 0.037411 (-0.002571) | 0.122529 / 0.014526 (0.108003) | 0.133870 / 0.176557 (-0.042686) | 0.207771 / 0.737135 (-0.529364) | 0.141441 / 0.296338 (-0.154897) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.604190 / 0.215209 (0.388981) | 6.040295 / 2.077655 (3.962641) | 2.405703 
/ 1.504120 (0.901583) | 2.062767 / 1.541195 (0.521572) | 2.079313 / 1.468490 (0.610823) | 1.240107 / 4.584777 (-3.344670) | 5.316583 / 3.745712 (1.570871) | 3.104758 / 5.269862 (-2.165103) | 2.056489 / 4.565676 (-2.509187) | 0.149060 / 0.424275 (-0.275215) | 0.014467 / 0.007607 (0.006860) | 0.736882 / 0.226044 (0.510838) | 7.324142 / 2.268929 (5.055213) | 3.048752 / 55.444624 (-52.395872) | 2.385013 / 6.876477 (-4.491463) | 2.457478 / 2.142072 (0.315405) | 1.459276 / 4.805227 (-3.345951) | 0.253882 / 6.500664 (-6.246782) | 0.076756 / 0.075469 (0.001287) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.499166 / 1.841788 (-0.342622) | 17.294165 / 8.074308 (9.219857) | 20.385668 / 10.191392 (10.194276) | 0.254633 / 0.680424 (-0.425791) | 0.026253 / 0.534201 (-0.507948) | 0.532928 / 0.579283 (-0.046355) | 0.606095 / 0.434364 (0.171731) | 0.615025 / 0.540337 (0.074687) | 0.728651 / 1.386936 (-0.658285) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009376 / 0.011353 (-0.001977) | 0.005981 / 0.011008 (-0.005027) | 0.109898 / 0.038508 (0.071390) | 0.033746 / 0.023109 (0.010637) | 0.410226 / 0.275898 (0.134328) | 0.470606 / 0.323480 (0.147126) | 0.006706 / 0.007986 (-0.001279) | 0.004482 / 0.004328 (0.000153) | 0.092280 / 0.004250 (0.088030) | 0.047988 / 0.037052 (0.010935) | 0.430628 / 0.258489 (0.172139) | 0.480668 / 0.293841 (0.186827) | 0.052099 / 0.128546 (-0.076447) | 0.018743 / 0.075646 (-0.056903) | 0.112204 / 0.419271 (-0.307068) | 0.059838 / 0.043533 (0.016305) | 0.418230 / 0.255139 (0.163091) | 0.451568 / 0.283200 (0.168368) | 0.107026 / 0.141683 (-0.034657) | 1.708111 / 1.452155 (0.255956) | 1.839268 / 1.492716 (0.346552) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.229558 / 0.018006 (0.211552) | 0.488099 / 0.000490 (0.487609) | 0.004643 / 0.000200 (0.004443) | 0.000107 / 0.000054 (0.000053) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030461 / 0.037411 (-0.006951) | 0.120993 / 0.014526 (0.106467) | 0.130874 / 0.176557 (-0.045682) | 0.193550 / 0.737135 (-0.543585) | 0.138164 / 0.296338 (-0.158174) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.635709 / 0.215209 (0.420500) | 6.225112 / 2.077655 (4.147457) | 2.639584 / 1.504120 (1.135465) | 2.254487 / 1.541195 (0.713293) | 2.280478 / 1.468490 (0.811988) | 1.205712 / 4.584777 (-3.379065) | 5.367845 / 3.745712 (1.622133) | 3.020207 / 5.269862 (-2.249655) | 2.001897 / 4.565676 (-2.563779) | 0.149582 / 0.424275 (-0.274693) | 0.014867 / 0.007607 (0.007260) | 0.759050 / 0.226044 (0.533006) | 7.692969 / 2.268929 (5.424041) | 3.274009 / 55.444624 (-52.170615) | 2.635529 / 6.876477 (-4.240948) | 2.672960 / 2.142072 (0.530888) | 1.426487 / 4.805227 (-3.378740) | 0.253368 / 6.500664 (-6.247296) | 0.078650 / 0.075469 (0.003181) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.620265 / 1.841788 (-0.221523) | 17.674168 / 8.074308 (9.599860) | 21.120528 / 10.191392 (10.929136) | 0.244205 / 0.680424 (-0.436218) | 0.029646 / 0.534201 (-0.504555) | 0.510948 / 0.579283 (-0.068335) | 0.586255 / 0.434364 (0.151891) | 0.589286 / 0.540337 (0.048949) | 0.736561 / 1.386936 (-0.650375) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#de5fe9ae5df84c489e08dcbdc3d2d20272b312c3 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007778 / 0.011353 (-0.003575) | 0.005432 / 0.011008 (-0.005577) | 0.098776 / 0.038508 (0.060268) | 0.035196 / 0.023109 (0.012087) | 0.305646 / 0.275898 (0.029748) | 0.342661 / 0.323480 (0.019181) | 0.006513 / 0.007986 (-0.001472) | 0.005897 / 0.004328 (0.001568) | 0.075797 / 0.004250 (0.071547) | 0.056060 / 0.037052 (0.019007) | 0.306645 / 0.258489 (0.048156) | 0.352447 / 0.293841 (0.058606) | 0.037304 / 0.128546 (-0.091242) | 0.012514 / 0.075646 (-0.063132) | 0.334949 / 0.419271 (-0.084323) | 0.051600 / 0.043533 (0.008067) | 0.302302 / 0.255139 (0.047163) | 0.322238 / 0.283200 (0.039038) | 0.106896 / 0.141683 (-0.034787) | 1.483163 / 1.452155 (0.031008) | 1.587483 / 1.492716 (0.094767) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.292318 / 0.018006 (0.274312) | 0.541541 / 0.000490 (0.541051) | 0.008342 / 0.000200 (0.008142) | 0.000339 / 0.000054 (0.000285) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028287 / 0.037411 (-0.009124) | 0.107775 / 0.014526 (0.093250) | 0.119112 / 0.176557 (-0.057445) | 0.174002 / 0.737135 (-0.563134) | 0.126531 / 0.296338 (-0.169808) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.401684 / 0.215209 (0.186475) | 4.024708 / 2.077655 (1.947053) | 1.812763 / 1.504120 (0.308643) | 1.629540 / 1.541195 (0.088345) | 1.731733 / 1.468490 (0.263243) | 0.711066 / 4.584777 (-3.873711) | 3.867499 / 3.745712 (0.121786) | 3.615968 / 5.269862 (-1.653893) | 1.876077 / 4.565676 (-2.689600) | 0.087003 / 0.424275 (-0.337272) | 0.012445 / 0.007607 (0.004838) | 0.499106 / 0.226044 (0.273061) | 4.975920 / 2.268929 (2.706992) | 2.279074 / 55.444624 (-53.165550) | 1.952311 / 6.876477 (-4.924166) | 2.167480 / 2.142072 (0.025408) | 0.855882 / 4.805227 (-3.949346) | 0.171378 / 6.500664 (-6.329287) | 0.066731 / 0.075469 (-0.008738) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.184226 / 1.841788 (-0.657561) | 15.383396 / 8.074308 (7.309088) | 15.069783 / 10.191392 (4.878391) | 0.161489 / 0.680424 (-0.518935) | 0.017763 / 0.534201 (-0.516438) | 0.427103 / 0.579283 (-0.152180) | 0.434295 / 0.434364 (-0.000069) | 0.496848 / 0.540337 
(-0.043489) | 0.592572 / 1.386936 (-0.794364) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008014 / 0.011353 (-0.003339) | 0.005607 / 0.011008 (-0.005401) | 0.076826 / 0.038508 (0.038318) | 0.035283 / 0.023109 (0.012174) | 0.347809 / 0.275898 (0.071911) | 0.382482 / 0.323480 (0.059003) | 0.006276 / 0.007986 (-0.001709) | 0.005978 / 0.004328 (0.001650) | 0.074938 / 0.004250 (0.070687) | 0.054323 / 0.037052 (0.017271) | 0.344027 / 0.258489 (0.085538) | 0.397623 / 0.293841 (0.103783) | 0.037851 / 0.128546 (-0.090695) | 0.012649 / 0.075646 (-0.062997) | 0.086169 / 0.419271 (-0.333103) | 0.051510 / 0.043533 (0.007977) | 0.341112 / 0.255139 (0.085973) | 0.357957 / 0.283200 (0.074757) | 0.110949 / 0.141683 (-0.030734) | 1.479573 / 1.452155 (0.027419) | 1.578572 / 1.492716 (0.085855) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.310678 / 0.018006 (0.292672) | 0.525504 / 0.000490 (0.525015) | 0.000447 / 0.000200 (0.000247) | 0.000060 / 0.000054 (0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031262 / 0.037411 (-0.006149) | 0.113801 / 0.014526 (0.099275) | 0.124967 / 0.176557 (-0.051590) | 0.175226 / 0.737135 (-0.561909) | 0.129377 / 0.296338 (-0.166962) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.420672 / 0.215209 (0.205463) | 4.181337 / 2.077655 (2.103682) | 1.985524 / 1.504120 (0.481404) | 1.803468 / 1.541195 (0.262273) | 1.952915 / 
1.468490 (0.484425) | 0.710928 / 4.584777 (-3.873849) | 3.886245 / 3.745712 (0.140533) | 3.737837 / 5.269862 (-1.532024) | 1.806859 / 4.565676 (-2.758818) | 0.088461 / 0.424275 (-0.335814) | 0.013125 / 0.007607 (0.005518) | 0.522410 / 0.226044 (0.296365) | 5.232591 / 2.268929 (2.963663) | 2.451188 / 55.444624 (-52.993437) | 2.127725 / 6.876477 (-4.748751) | 2.232859 / 2.142072 (0.090786) | 0.854257 / 4.805227 (-3.950970) | 0.171004 / 6.500664 (-6.329661) | 0.066724 / 0.075469 (-0.008746) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.257700 / 1.841788 (-0.584088) | 15.738605 / 8.074308 (7.664297) | 15.021698 / 10.191392 (4.830306) | 0.147422 / 0.680424 (-0.533002) | 0.017928 / 0.534201 (-0.516273) | 0.428121 / 0.579283 (-0.151162) | 0.432056 / 0.434364 (-0.002308) | 0.498318 / 0.540337 (-0.042020) | 0.591040 / 1.386936 (-0.795896) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#1ac74267032ef3608779a8c8c4361b95a83ecbcb \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007014 / 0.011353 (-0.004339) | 0.004792 / 0.011008 (-0.006216) | 0.099822 / 0.038508 (0.061314) | 0.029333 / 0.023109 (0.006224) | 0.306453 / 0.275898 (0.030555) | 0.344598 / 0.323480 (0.021118) | 0.005121 / 0.007986 (-0.002865) | 0.004850 / 0.004328 (0.000522) | 0.076668 / 0.004250 (0.072417) | 0.039980 / 0.037052 (0.002927) | 0.312276 / 0.258489 (0.053787) | 0.354722 / 0.293841 (0.060881) | 0.031653 / 0.128546 (-0.096893) | 0.011743 / 0.075646 (-0.063903) | 0.322998 / 0.419271 (-0.096274) | 0.042813 / 0.043533 (-0.000720) | 0.308855 / 0.255139 (0.053716) | 0.332650 / 0.283200 (0.049451) | 0.087155 / 0.141683 (-0.054528) | 1.454946 / 1.452155 (0.002791) | 1.550589 / 1.492716 (0.057873) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.192921 / 0.018006 (0.174914) | 0.411155 / 0.000490 (0.410666) | 0.004779 / 0.000200 
(0.004579) | 0.000071 / 0.000054 (0.000017) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024462 / 0.037411 (-0.012950) | 0.100320 / 0.014526 (0.085794) | 0.105509 / 0.176557 (-0.071048) | 0.168533 / 0.737135 (-0.568602) | 0.110018 / 0.296338 (-0.186321) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.415025 / 0.215209 (0.199816) | 4.144583 / 2.077655 (2.066928) | 1.871627 / 1.504120 (0.367507) | 1.671638 / 1.541195 (0.130443) | 1.734458 / 1.468490 (0.265968) | 0.693435 / 4.584777 (-3.891342) | 3.487999 / 3.745712 (-0.257713) | 3.196553 / 5.269862 (-2.073308) | 1.628499 / 4.565676 (-2.937178) | 0.082999 / 0.424275 (-0.341276) | 0.012822 / 0.007607 (0.005215) | 0.514904 / 0.226044 (0.288860) | 5.157525 / 2.268929 (2.888596) | 2.313093 / 55.444624 (-53.131531) | 1.968335 / 6.876477 (-4.908142) | 2.083462 / 2.142072 (-0.058610) | 0.804485 / 4.805227 (-4.000742) | 0.152290 / 6.500664 (-6.348374) | 0.066813 / 0.075469 (-0.008656) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.210370 / 1.841788 (-0.631418) | 14.261779 / 8.074308 (6.187471) | 14.268121 / 10.191392 (4.076729) | 0.149216 / 0.680424 (-0.531207) | 0.016529 / 0.534201 (-0.517672) | 0.378814 / 0.579283 (-0.200469) | 0.386304 / 0.434364 (-0.048060) | 0.439653 / 0.540337 (-0.100684) | 0.523658 / 1.386936 (-0.863278) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006979 / 0.011353 (-0.004374) | 0.004718 / 0.011008 (-0.006290) | 0.077023 / 0.038508 (0.038514) | 0.029080 / 0.023109 (0.005971) | 0.343145 / 0.275898 (0.067247) | 0.380633 / 0.323480 (0.057153) | 0.006057 / 0.007986 (-0.001928) | 0.003541 / 0.004328 (-0.000788) | 0.075773 / 0.004250 (0.071523) | 0.039112 / 0.037052 (0.002060) | 0.342355 / 0.258489 (0.083866) | 0.386002 / 0.293841 (0.092161) | 0.033238 / 0.128546 (-0.095308) | 0.011696 / 0.075646 (-0.063950) | 0.086178 / 0.419271 (-0.333093) | 0.045219 / 0.043533 (0.001686) | 0.360710 / 0.255139 (0.105571) | 0.367490 / 0.283200 (0.084290) | 0.093041 / 0.141683 (-0.048642) | 1.523670 / 1.452155 (0.071516) | 1.595280 / 1.492716 (0.102564) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.235888 / 0.018006 (0.217882) | 0.410205 / 0.000490 (0.409715) | 0.000405 / 0.000200 (0.000205) | 0.000059 / 0.000054 (0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025752 / 0.037411 (-0.011659) | 0.103343 / 0.014526 (0.088818) | 0.108722 / 0.176557 (-0.067834) | 0.159241 / 0.737135 (-0.577894) | 0.113684 / 0.296338 (-0.182654) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.441809 / 0.215209 (0.226600) | 4.410893 / 2.077655 (2.333238) | 2.104061 / 1.504120 (0.599941) | 1.854016 / 1.541195 (0.312821) | 1.947100 / 1.468490 (0.478610) | 0.697682 / 4.584777 (-3.887095) | 3.467513 / 3.745712 (-0.278199) | 1.911603 / 5.269862 (-3.358258) | 1.187479 / 4.565676 (-3.378197) | 0.083153 / 0.424275 (-0.341122) | 0.012651 / 0.007607 (0.005044) | 0.542081 / 0.226044 (0.316036) | 5.444622 / 2.268929 (3.175693) | 2.524236 / 55.444624 (-52.920388) | 2.190463 / 6.876477 (-4.686014) | 2.265764 / 2.142072 (0.123691) | 0.810778 / 4.805227 (-3.994450) | 0.152459 / 6.500664 (-6.348205) | 0.067815 / 0.075469 (-0.007654) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.334388 / 1.841788 (-0.507400) | 14.640459 / 8.074308 (6.566151) | 14.714874 / 10.191392 (4.523482) | 0.153479 / 0.680424 (-0.526945) | 0.016709 / 0.534201 (-0.517492) | 0.379427 / 0.579283 (-0.199856) | 0.391602 / 0.434364 (-0.042762) | 0.438297 / 0.540337 (-0.102041) | 0.524170 / 1.386936 (-0.862766) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#b277cef5cb56c0c506eda082fb69fddb839156a1 \"CML watermark\")\n" ]
1,679,408,289,000
1,679,577,567,000
1,679,577,145,000
MEMBER
null
Following the discussion at https://github.com/huggingface/datasets/pull/5589: right now, `to_iterable_dataset` on image/audio datasets hurts iterable dataset performance a lot (e.g. ~4x slower), because it unnecessarily encodes and then decodes images/audio. I fixed it by providing a generator that yields undecoded examples.
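As a rough sketch of the idea (illustrative only, not the PR's actual internals): yield rows straight from the Arrow table in their stored form, so an `Image` column stays a `{"bytes": ..., "path": ...}` dict instead of being decoded to a PIL image and re-encoded while the iterable dataset is built.

```python
import pyarrow as pa

def iter_undecoded(table: pa.Table):
    # Walk the Arrow table batch by batch and yield raw rows; feature
    # decoding is deferred until the iterable dataset is actually consumed.
    for batch in table.to_batches():
        yield from batch.to_pylist()
```

Decoding then runs once per example, lazily, at iteration time, instead of twice up front.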
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5655/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5655/timeline
null
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5655", "html_url": "https://github.com/huggingface/datasets/pull/5655", "diff_url": "https://github.com/huggingface/datasets/pull/5655.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5655.patch", "merged_at": "2023-03-23T13:12:25" }
true
https://api.github.com/repos/huggingface/datasets/issues/5654
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5654/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5654/comments
https://api.github.com/repos/huggingface/datasets/issues/5654/events
https://github.com/huggingface/datasets/issues/5654
1,633,523,705
I_kwDODunzps5hXZf5
5,654
Offset overflow when executing Dataset.map
{ "login": "jan-pair", "id": 118280608, "node_id": "U_kgDOBwzRoA", "avatar_url": "https://avatars.githubusercontent.com/u/118280608?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jan-pair", "html_url": "https://github.com/jan-pair", "followers_url": "https://api.github.com/users/jan-pair/followers", "following_url": "https://api.github.com/users/jan-pair/following{/other_user}", "gists_url": "https://api.github.com/users/jan-pair/gists{/gist_id}", "starred_url": "https://api.github.com/users/jan-pair/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jan-pair/subscriptions", "organizations_url": "https://api.github.com/users/jan-pair/orgs", "repos_url": "https://api.github.com/users/jan-pair/repos", "events_url": "https://api.github.com/users/jan-pair/events{/privacy}", "received_events_url": "https://api.github.com/users/jan-pair/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Upd. the above code works if we replace `25` with `1`, but the result value at key \"hr\" is not a tensor but a list of lists of lists of uint8.\r\n\r\nAdding `train_data.set_format(\"torch\")` after map fixes this, but the original issue remains\r\n\r\n", "As a workaround, one can replace\r\n`return {\"hr\": torch.stack([crop_transf(tensor) for _ in range(25)])}`\r\nwith\r\n`return {f\"hr_crop_{i}\": crop_transf(tensor) for i in range(25)}`\r\nand then choose appropriate crop randomly in further processing, but I still don't understand why the original approach doesn't work(\r\n" ]
1,679,391,207,000
1,679,394,727,000
null
NONE
null
### Describe the bug Hi, I'm trying to use the `.map` method to cache multiple random crops from the image to speed up data processing during training, as the image size is too big. The map function completes all iterations and then raises the following error: ```bash Traceback (most recent call last): File "/home/ubuntu/miniconda3/envs/enhancement/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 3353, in _map_single writer.finalize() # close_stream=bool(buf_writer is None)) # We only close if we are writing in a file File "/home/ubuntu/miniconda3/envs/enhancement/lib/python3.8/site-packages/datasets/arrow_writer.py", line 582, in finalize self.write_examples_on_file() File "/home/ubuntu/miniconda3/envs/enhancement/lib/python3.8/site-packages/datasets/arrow_writer.py", line 446, in write_examples_on_file self.write_batch(batch_examples=batch_examples) File "/home/ubuntu/miniconda3/envs/enhancement/lib/python3.8/site-packages/datasets/arrow_writer.py", line 555, in write_batch self.write_table(pa_table, writer_batch_size) File "/home/ubuntu/miniconda3/envs/enhancement/lib/python3.8/site-packages/datasets/arrow_writer.py", line 567, in write_table pa_table = pa_table.combine_chunks() File "pyarrow/table.pxi", line 3315, in pyarrow.lib.Table.combine_chunks File "pyarrow/error.pxi", line 144, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 100, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: offset overflow while concatenating arrays ``` Here is the minimal code (`/home/datasets/DIV2K_train_HR` is just a folder of images that can be replaced by any appropriate image folder): ### Steps to reproduce the bug ```python from glob import glob import torch from datasets import Dataset, Image from torchvision.transforms import PILToTensor, RandomCrop file_paths = glob("/home/datasets/DIV2K_train_HR/*") to_tensor = PILToTensor() crop_transf = RandomCrop(size=256) def prepare_data(example): tensor = to_tensor(example["image"].convert("RGB")) return {"hr": torch.stack([crop_transf(tensor) for _ in range(25)])} train_data = Dataset.from_dict({"image": file_paths}).cast_column("image", Image()) train_data = train_data.map( prepare_data, cache_file_name="/home/datasets/DIV2K_train_HR_crops.tmp", desc="Caching multiple random crops of image", remove_columns="image", ) print(train_data[0].keys(), train_data[0]["hr"].shape) ``` ### Expected behavior Cached file is stored at `"/home/datasets/DIV2K_train_HR_crops.tmp"`, output is `dict_keys(['hr']) torch.Size([25, 3, 256, 256])` ### Environment info - `datasets` version: 2.10.1 - Platform: Linux-5.15.0-67-generic-x86_64-with-glibc2.10 - Python version: 3.8.16 - PyArrow version: 11.0.0 - Pandas version: 1.5.3 - Pytorch version: 2.0.0+cu117 - torchvision version: 0.15.1+cu117
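A likely explanation, offered here as a hedged note rather than a confirmed diagnosis: each example carries 25 x 3 x 256 x 256 uint8 values, roughly 4.9 MB, so `map`'s default `writer_batch_size` of 1000 produces a ~4.9 GB batch, beyond the ~2 GiB that a single Arrow array with 32-bit offsets can address when `combine_chunks` merges it. Lowering `writer_batch_size` keeps each written batch under that limit:

```python
train_data = train_data.map(
    prepare_data,
    cache_file_name="/home/datasets/DIV2K_train_HR_crops.tmp",
    desc="Caching multiple random crops of image",
    remove_columns="image",
    writer_batch_size=100,  # ~0.5 GB per written batch, under the ~2 GiB offset limit
)
train_data.set_format("torch")  # so train_data[0]["hr"] comes back as a tensor
```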
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5654/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5654/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5653
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5653/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5653/comments
https://api.github.com/repos/huggingface/datasets/issues/5653/events
https://github.com/huggingface/datasets/issues/5653
1,633,254,159
I_kwDODunzps5hWXsP
5,653
Doc: in save_to_disk, `num_proc` affects `num_shards`, but it's not documented
{ "login": "RmZeta2718", "id": 42400165, "node_id": "MDQ6VXNlcjQyNDAwMTY1", "avatar_url": "https://avatars.githubusercontent.com/u/42400165?v=4", "gravatar_id": "", "url": "https://api.github.com/users/RmZeta2718", "html_url": "https://github.com/RmZeta2718", "followers_url": "https://api.github.com/users/RmZeta2718/followers", "following_url": "https://api.github.com/users/RmZeta2718/following{/other_user}", "gists_url": "https://api.github.com/users/RmZeta2718/gists{/gist_id}", "starred_url": "https://api.github.com/users/RmZeta2718/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/RmZeta2718/subscriptions", "organizations_url": "https://api.github.com/users/RmZeta2718/orgs", "repos_url": "https://api.github.com/users/RmZeta2718/repos", "events_url": "https://api.github.com/users/RmZeta2718/events{/privacy}", "received_events_url": "https://api.github.com/users/RmZeta2718/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" }, { "id": 1935892877, "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue", "name": "good first issue", "color": "7057ff", "default": true, "description": "Good for newcomers" } ]
closed
false
null
[]
[ "I agree this should be documented" ]
1,679,376,335,000
1,679,675,783,000
1,679,675,783,000
NONE
null
### Describe the bug [`num_proc`](https://huggingface.co/docs/datasets/main/en/package_reference/main_classes#datasets.DatasetDict.save_to_disk.num_proc) affects `num_shards`, but this is not documented. ### Steps to reproduce the bug Nothing to reproduce. ### Expected behavior The [documentation of `num_shards`](https://huggingface.co/docs/datasets/main/en/package_reference/main_classes#datasets.DatasetDict.save_to_disk.num_shards) explicitly says that it depends on `max_shard_size`; it should also mention `num_proc`. ### Environment info datasets main documentation
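To illustrate the interaction being reported (a sketch; the output path and sizes are made up, and the exact defaulting logic lives in the library):

```python
from datasets import Dataset

ds = Dataset.from_dict({"x": list(range(10_000))})
ds.save_to_disk("out_dir", num_proc=8)
# With neither num_shards nor max_shard_size given, the shard count here
# follows num_proc (roughly one shard per writer process), which is the
# undocumented dependency this issue asks to have spelled out.
```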
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5653/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5653/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5652
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5652/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5652/comments
https://api.github.com/repos/huggingface/datasets/issues/5652/events
https://github.com/huggingface/datasets/pull/5652
1,632,546,073
PR_kwDODunzps5MeVUR
5,652
Copy features
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007455 / 0.011353 (-0.003898) | 0.005278 / 0.011008 (-0.005731) | 0.098981 / 0.038508 (0.060473) | 0.029208 / 0.023109 (0.006099) | 0.304132 / 0.275898 (0.028234) | 0.340010 / 0.323480 (0.016530) | 0.005514 / 0.007986 (-0.002472) | 0.003636 / 0.004328 (-0.000692) | 0.076737 / 0.004250 (0.072486) | 0.041985 / 0.037052 (0.004933) | 0.314941 / 0.258489 (0.056452) | 0.346686 / 0.293841 (0.052845) | 0.032528 / 0.128546 (-0.096018) | 0.011795 / 0.075646 (-0.063851) | 0.322122 / 0.419271 (-0.097150) | 0.051548 / 0.043533 (0.008015) | 0.310561 / 0.255139 (0.055422) | 0.329443 / 0.283200 (0.046243) | 0.092820 / 0.141683 (-0.048863) | 1.495764 / 1.452155 (0.043609) | 1.586734 / 1.492716 (0.094018) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.195830 / 0.018006 (0.177824) | 0.422075 / 0.000490 (0.421586) | 0.005483 / 0.000200 (0.005283) | 0.000133 / 0.000054 (0.000078) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023468 / 0.037411 (-0.013943) | 0.097713 / 0.014526 (0.083187) | 0.105331 / 0.176557 (-0.071225) | 0.166237 / 0.737135 (-0.570898) | 0.108924 / 0.296338 (-0.187415) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.671901 / 0.215209 (0.456692) | 6.745691 / 2.077655 (4.668036) | 
2.132508 / 1.504120 (0.628388) | 1.693808 / 1.541195 (0.152614) | 1.715282 / 1.468490 (0.246792) | 0.955354 / 4.584777 (-3.629422) | 3.810296 / 3.745712 (0.064584) | 2.214891 / 5.269862 (-3.054970) | 1.461513 / 4.565676 (-3.104164) | 0.109846 / 0.424275 (-0.314430) | 0.013546 / 0.007607 (0.005939) | 0.780046 / 0.226044 (0.554001) | 7.789020 / 2.268929 (5.520091) | 2.602411 / 55.444624 (-52.842213) | 1.995096 / 6.876477 (-4.881380) | 2.009022 / 2.142072 (-0.133051) | 1.069215 / 4.805227 (-3.736012) | 0.179812 / 6.500664 (-6.320852) | 0.068125 / 0.075469 (-0.007344) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.201866 / 1.841788 (-0.639921) | 13.878814 / 8.074308 (5.804506) | 14.179264 / 10.191392 (3.987872) | 0.128908 / 0.680424 (-0.551515) | 0.017257 / 0.534201 (-0.516944) | 0.379500 / 0.579283 (-0.199783) | 0.393308 / 0.434364 (-0.041056) | 0.444700 / 0.540337 (-0.095638) | 0.531043 / 1.386936 (-0.855893) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007413 / 0.011353 (-0.003940) | 0.005431 / 0.011008 (-0.005577) | 0.078158 / 0.038508 (0.039650) | 0.028837 / 0.023109 (0.005728) | 0.343635 / 0.275898 (0.067737) | 0.383041 / 0.323480 (0.059561) | 0.005283 / 0.007986 (-0.002703) | 0.003673 / 0.004328 (-0.000655) | 0.076461 / 0.004250 (0.072211) | 0.038625 / 0.037052 (0.001573) | 0.341109 / 0.258489 (0.082620) | 0.387027 / 0.293841 (0.093186) | 0.032512 / 0.128546 (-0.096034) | 0.011903 / 0.075646 (-0.063744) | 0.086340 / 0.419271 (-0.332931) | 0.043211 / 0.043533 (-0.000321) | 0.339994 / 0.255139 (0.084855) | 0.370868 / 0.283200 (0.087668) | 0.091679 / 0.141683 (-0.050004) | 1.547188 / 1.452155 (0.095033) | 1.578545 / 1.492716 (0.085829) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.216981 / 0.018006 (0.198975) | 0.412206 / 0.000490 (0.411716) | 0.004243 / 0.000200 (0.004043) | 0.000130 / 0.000054 (0.000075) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025392 / 0.037411 (-0.012020) | 0.102577 / 0.014526 (0.088051) | 0.107672 / 0.176557 (-0.068884) | 0.160657 / 0.737135 (-0.576478) | 0.111646 / 0.296338 (-0.184692) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.698815 / 0.215209 (0.483606) | 6.958931 / 2.077655 (4.881276) | 2.344216 / 1.504120 (0.840096) | 1.907752 / 1.541195 (0.366557) | 1.964251 / 1.468490 (0.495761) | 0.950754 / 4.584777 (-3.634023) | 3.829700 / 3.745712 (0.083988) | 3.055565 / 5.269862 (-2.214297) | 1.575851 / 4.565676 (-2.989825) | 0.109227 / 0.424275 (-0.315048) | 0.013163 / 0.007607 (0.005556) | 0.804613 / 0.226044 (0.578569) | 8.015035 / 2.268929 (5.746107) | 2.796358 / 55.444624 (-52.648266) | 2.212561 / 6.876477 (-4.663916) | 2.229918 / 2.142072 (0.087845) | 1.062041 / 4.805227 (-3.743186) | 0.181384 / 6.500664 (-6.319280) | 0.068564 / 0.075469 (-0.006905) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.287904 / 1.841788 (-0.553884) | 14.539222 / 8.074308 (6.464914) | 14.232097 / 10.191392 (4.040705) | 0.130870 / 0.680424 (-0.549554) | 0.016710 / 0.534201 (-0.517491) | 0.384454 / 0.579283 (-0.194829) | 0.391750 / 0.434364 (-0.042614) | 0.443995 / 0.540337 (-0.096343) | 0.526255 / 1.386936 (-0.860681) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#bd46874a580b888bdc82b53daace79884f04bc62 \"CML watermark\")\n", "Arf I need to fix some tests first - sorry", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after 
write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008393 / 0.011353 (-0.002959) | 0.005635 / 0.011008 (-0.005373) | 0.114840 / 0.038508 (0.076332) | 0.039272 / 0.023109 (0.016163) | 0.352116 / 0.275898 (0.076218) | 0.386614 / 0.323480 (0.063134) | 0.006348 / 0.007986 (-0.001638) | 0.005872 / 0.004328 (0.001544) | 0.086437 / 0.004250 (0.082187) | 0.054003 / 0.037052 (0.016951) | 0.350302 / 0.258489 (0.091813) | 0.400148 / 0.293841 (0.106308) | 0.042436 / 0.128546 (-0.086111) | 0.013987 / 0.075646 (-0.061660) | 0.399434 / 0.419271 (-0.019837) | 0.059223 / 0.043533 (0.015690) | 0.354511 / 0.255139 (0.099372) | 0.377764 / 0.283200 (0.094564) | 0.112297 / 0.141683 (-0.029386) | 1.677483 / 1.452155 (0.225328) | 1.784942 / 1.492716 (0.292226) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.233334 / 0.018006 (0.215328) | 0.450575 / 0.000490 (0.450085) | 0.000376 / 0.000200 (0.000176) | 0.000068 / 0.000054 (0.000014) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031995 / 0.037411 (-0.005416) | 0.126798 / 0.014526 (0.112272) | 0.138453 / 0.176557 (-0.038104) | 0.207360 / 0.737135 (-0.529775) | 0.147744 / 0.296338 (-0.148594) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.481496 / 0.215209 (0.266287) | 4.810495 / 2.077655 (2.732840) | 2.457917 / 1.504120 (0.953797) | 2.300073 / 1.541195 (0.758879) | 2.065595 / 1.468490 (0.597105) | 0.814589 / 4.584777 (-3.770188) | 4.566496 / 3.745712 (0.820784) | 2.386947 / 5.269862 (-2.882914) | 1.531639 / 4.565676 (-3.034037) | 0.099569 / 0.424275 (-0.324706) | 0.014971 / 0.007607 (0.007364) | 0.590359 / 0.226044 (0.364314) | 5.885250 / 2.268929 (3.616322) | 2.706799 / 55.444624 (-52.737826) | 2.324485 / 6.876477 (-4.551992) | 2.452751 / 2.142072 (0.310678) | 0.966955 / 4.805227 (-3.838272) | 0.198165 / 6.500664 (-6.302499) | 0.076877 / 0.075469 (0.001408) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.499085 / 1.841788 (-0.342702) | 17.705516 / 8.074308 (9.631208) | 16.481174 / 10.191392 (6.289782) | 0.191832 / 0.680424 (-0.488592) | 0.021417 / 0.534201 (-0.512784) | 0.519647 / 0.579283 (-0.059636) | 0.498432 / 
0.434364 (0.064068) | 0.598206 / 0.540337 (0.057868) | 0.700990 / 1.386936 (-0.685946) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008746 / 0.011353 (-0.002607) | 0.006052 / 0.011008 (-0.004956) | 0.092938 / 0.038508 (0.054430) | 0.038932 / 0.023109 (0.015823) | 0.406919 / 0.275898 (0.131021) | 0.444325 / 0.323480 (0.120845) | 0.006735 / 0.007986 (-0.001251) | 0.005972 / 0.004328 (0.001643) | 0.088152 / 0.004250 (0.083902) | 0.051009 / 0.037052 (0.013957) | 0.407415 / 0.258489 (0.148926) | 0.481048 / 0.293841 (0.187207) | 0.043268 / 0.128546 (-0.085278) | 0.014574 / 0.075646 (-0.061072) | 0.103555 / 0.419271 (-0.315716) | 0.058251 / 0.043533 (0.014719) | 0.406294 / 0.255139 (0.151155) | 0.429229 / 0.283200 (0.146029) | 0.116977 / 0.141683 (-0.024705) | 1.765885 / 1.452155 (0.313730) | 1.885557 / 1.492716 (0.392841) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.284014 / 0.018006 (0.266008) | 0.458066 / 0.000490 (0.457576) | 0.022286 / 0.000200 (0.022086) | 0.000158 / 0.000054 (0.000104) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033971 / 0.037411 (-0.003440) | 0.132030 / 0.014526 (0.117504) | 0.141725 / 0.176557 (-0.034831) | 0.199818 / 0.737135 (-0.537318) | 0.149176 / 0.296338 (-0.147162) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.511463 / 0.215209 (0.296254) | 4.917921 / 2.077655 (2.840267) | 2.382377 / 1.504120 (0.878257) | 
2.154599 / 1.541195 (0.613404) | 2.247858 / 1.468490 (0.779368) | 0.834524 / 4.584777 (-3.750253) | 4.560010 / 3.745712 (0.814297) | 2.403055 / 5.269862 (-2.866806) | 1.780784 / 4.565676 (-2.784893) | 0.101409 / 0.424275 (-0.322866) | 0.014657 / 0.007607 (0.007050) | 0.610137 / 0.226044 (0.384093) | 6.051011 / 2.268929 (3.782083) | 2.887357 / 55.444624 (-52.557267) | 2.518225 / 6.876477 (-4.358252) | 2.559654 / 2.142072 (0.417582) | 0.981226 / 4.805227 (-3.824001) | 0.197323 / 6.500664 (-6.303341) | 0.076851 / 0.075469 (0.001382) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.554662 / 1.841788 (-0.287126) | 18.038993 / 8.074308 (9.964685) | 16.719948 / 10.191392 (6.528556) | 0.195641 / 0.680424 (-0.484783) | 0.020699 / 0.534201 (-0.513502) | 0.498949 / 0.579283 (-0.080334) | 0.487775 / 0.434364 (0.053411) | 0.591413 / 0.540337 (0.051075) | 0.708520 / 1.386936 (-0.678416) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#39de0d78224c070be33d0820ec9203018fb721d1 \"CML watermark\")\n", "Ready for review @mariosasko :)", "Yea it does behave as expected, but modifying a dataset's features dict is not recommended and can lead to unpredictable behaviors. By copying the features, we make sure users don't modify the dataset's features dict.\r\n\r\nSince the attribute is public, users expect to be able to do whatever they want with it, without checking if they have to copy it or not", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008069 / 0.011353 (-0.003284) | 0.005051 / 0.011008 (-0.005958) | 0.096587 / 0.038508 (0.058079) | 0.032954 / 0.023109 (0.009844) | 0.317877 / 0.275898 (0.041979) | 0.328677 / 0.323480 (0.005197) | 0.005524 / 0.007986 (-0.002462) | 0.003958 / 0.004328 (-0.000370) | 0.072692 / 0.004250 (0.068441) | 0.044554 / 0.037052 (0.007502) | 0.311121 / 0.258489 (0.052632) | 0.355912 / 0.293841 (0.062071) | 0.035934 / 0.128546 (-0.092612) | 0.012056 / 0.075646 (-0.063590) | 0.332575 / 0.419271 (-0.086696) | 0.049788 / 0.043533 (0.006255) | 0.307918 / 0.255139 
(0.052779) | 0.326757 / 0.283200 (0.043557) | 0.098671 / 0.141683 (-0.043012) | 1.424625 / 1.452155 (-0.027530) | 1.507944 / 1.492716 (0.015228) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.207976 / 0.018006 (0.189970) | 0.439604 / 0.000490 (0.439114) | 0.000435 / 0.000200 (0.000235) | 0.000057 / 0.000054 (0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026961 / 0.037411 (-0.010451) | 0.106627 / 0.014526 (0.092101) | 0.115292 / 0.176557 (-0.061264) | 0.171901 / 0.737135 (-0.565234) | 0.123276 / 0.296338 (-0.173062) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.407679 / 0.215209 (0.192469) | 4.071958 / 2.077655 (1.994303) | 1.854270 / 1.504120 (0.350151) | 1.678406 / 1.541195 (0.137211) | 1.715890 / 1.468490 (0.247400) | 0.705536 / 4.584777 (-3.879241) | 3.774198 / 3.745712 (0.028486) | 2.096429 / 5.269862 (-3.173432) | 1.431810 / 4.565676 (-3.133866) | 0.085557 / 0.424275 (-0.338718) | 0.012191 / 0.007607 (0.004584) | 0.502937 / 0.226044 (0.276893) | 5.034391 / 2.268929 (2.765463) | 2.393826 / 55.444624 (-53.050799) | 2.037383 / 6.876477 (-4.839094) | 2.192037 / 2.142072 (0.049964) | 0.829298 / 4.805227 (-3.975929) | 0.167781 / 6.500664 (-6.332883) | 0.063405 / 0.075469 (-0.012064) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.179189 / 1.841788 (-0.662599) | 14.464132 / 8.074308 (6.389824) | 14.869024 / 10.191392 (4.677632) | 0.172864 / 0.680424 (-0.507560) | 0.017817 / 0.534201 (-0.516384) | 0.427849 / 0.579283 (-0.151434) | 0.434447 / 0.434364 (0.000083) | 0.502077 / 0.540337 (-0.038260) | 0.599587 / 1.386936 (-0.787349) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after 
write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007366 / 0.011353 (-0.003987) | 0.004939 / 0.011008 (-0.006069) | 0.074982 / 0.038508 (0.036474) | 0.032611 / 0.023109 (0.009501) | 0.340670 / 0.275898 (0.064772) | 0.372471 / 0.323480 (0.048991) | 0.005567 / 0.007986 (-0.002418) | 0.003956 / 0.004328 (-0.000372) | 0.074550 / 0.004250 (0.070300) | 0.047097 / 0.037052 (0.010045) | 0.337049 / 0.258489 (0.078560) | 0.391512 / 0.293841 (0.097671) | 0.035712 / 0.128546 (-0.092835) | 0.012040 / 0.075646 (-0.063606) | 0.087126 / 0.419271 (-0.332146) | 0.048290 / 0.043533 (0.004757) | 0.335069 / 0.255139 (0.079930) | 0.362080 / 0.283200 (0.078881) | 0.098606 / 0.141683 (-0.043077) | 1.456802 / 1.452155 (0.004647) | 1.554652 / 1.492716 (0.061936) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.200015 / 0.018006 (0.182009) | 0.442772 / 0.000490 (0.442283) | 0.004594 / 0.000200 (0.004394) | 0.000089 / 0.000054 (0.000035) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028510 / 0.037411 (-0.008901) | 0.109654 / 0.014526 (0.095128) | 0.119921 / 0.176557 (-0.056636) | 0.170289 / 0.737135 (-0.566846) | 0.125288 / 0.296338 (-0.171051) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.430919 / 0.215209 (0.215710) | 4.274132 / 2.077655 (2.196478) | 2.014385 / 1.504120 (0.510265) | 1.822094 / 1.541195 (0.280899) | 1.938188 / 1.468490 (0.469698) | 0.707812 / 4.584777 (-3.876965) | 3.925730 / 3.745712 (0.180018) | 2.117481 / 5.269862 (-3.152381) | 1.369521 / 4.565676 (-3.196155) | 0.088414 / 0.424275 (-0.335861) | 0.013101 / 0.007607 (0.005494) | 0.538468 / 0.226044 (0.312424) | 5.384614 / 2.268929 (3.115685) | 2.487709 / 55.444624 (-52.956915) | 2.152060 / 6.876477 (-4.724417) | 2.225777 / 2.142072 (0.083705) | 0.856749 / 4.805227 (-3.948479) | 0.173299 / 6.500664 (-6.327366) | 0.068872 / 0.075469 (-0.006597) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched 
tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.268009 / 1.841788 (-0.573778) | 15.102648 / 8.074308 (7.028340) | 14.216810 / 10.191392 (4.025418) | 0.163661 / 0.680424 (-0.516763) | 0.017394 / 0.534201 (-0.516807) | 0.418030 / 0.579283 (-0.161253) | 0.413717 / 0.434364 (-0.020647) | 0.487526 / 0.540337 (-0.052811) | 0.581499 / 1.386936 (-0.805437) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#46bb11e96d053c497035a2436702860de9960a65 \"CML watermark\")\n" ]
1,679,332,643,000
1,679,577,559,000
1,679,577,128,000
MEMBER
null
Some users (even internally at HF) are doing

```python
dset_features = dset.features
dset_features.pop(col_to_remove)
dset = dset.map(..., features=dset_features)
```

Right now this causes issues because it modifies the features dict in place before the map. In this PR I modified `dset.features` to return a copy of the features, so that users can modify it if they want.
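A minimal sketch of how the new behavior plays out, assuming a toy dataset; the column name `col_to_remove` is taken from the snippet above and is purely illustrative:

```python
from datasets import Dataset

dset = Dataset.from_dict({"text": ["a", "b"], "col_to_remove": [0, 1]})

# With this PR, `dset.features` returns a copy, so popping a key here no
# longer mutates the dataset's own features dict in place.
dset_features = dset.features
dset_features.pop("col_to_remove")

# The edited copy can then be passed to map() together with
# remove_columns, as in the pattern above.
dset = dset.map(lambda x: x, remove_columns=["col_to_remove"], features=dset_features)
print(dset.features)  # only "text" remains
```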
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5652/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5652/timeline
null
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5652", "html_url": "https://github.com/huggingface/datasets/pull/5652", "diff_url": "https://github.com/huggingface/datasets/pull/5652.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5652.patch", "merged_at": "2023-03-23T13:12:08" }
true
https://api.github.com/repos/huggingface/datasets/issues/5651
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5651/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5651/comments
https://api.github.com/repos/huggingface/datasets/issues/5651/events
https://github.com/huggingface/datasets/issues/5651
1,631,967,509
I_kwDODunzps5hRdkV
5,651
expanduser in save_to_disk
{ "login": "RmZeta2718", "id": 42400165, "node_id": "MDQ6VXNlcjQyNDAwMTY1", "avatar_url": "https://avatars.githubusercontent.com/u/42400165?v=4", "gravatar_id": "", "url": "https://api.github.com/users/RmZeta2718", "html_url": "https://github.com/RmZeta2718", "followers_url": "https://api.github.com/users/RmZeta2718/followers", "following_url": "https://api.github.com/users/RmZeta2718/following{/other_user}", "gists_url": "https://api.github.com/users/RmZeta2718/gists{/gist_id}", "starred_url": "https://api.github.com/users/RmZeta2718/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/RmZeta2718/subscriptions", "organizations_url": "https://api.github.com/users/RmZeta2718/orgs", "repos_url": "https://api.github.com/users/RmZeta2718/repos", "events_url": "https://api.github.com/users/RmZeta2718/events{/privacy}", "received_events_url": "https://api.github.com/users/RmZeta2718/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892877, "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue", "name": "good first issue", "color": "7057ff", "default": true, "description": "Good for newcomers" } ]
open
false
{ "login": "benjaminbrown038", "id": 35114142, "node_id": "MDQ6VXNlcjM1MTE0MTQy", "avatar_url": "https://avatars.githubusercontent.com/u/35114142?v=4", "gravatar_id": "", "url": "https://api.github.com/users/benjaminbrown038", "html_url": "https://github.com/benjaminbrown038", "followers_url": "https://api.github.com/users/benjaminbrown038/followers", "following_url": "https://api.github.com/users/benjaminbrown038/following{/other_user}", "gists_url": "https://api.github.com/users/benjaminbrown038/gists{/gist_id}", "starred_url": "https://api.github.com/users/benjaminbrown038/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/benjaminbrown038/subscriptions", "organizations_url": "https://api.github.com/users/benjaminbrown038/orgs", "repos_url": "https://api.github.com/users/benjaminbrown038/repos", "events_url": "https://api.github.com/users/benjaminbrown038/events{/privacy}", "received_events_url": "https://api.github.com/users/benjaminbrown038/received_events", "type": "User", "site_admin": false }
[ { "login": "benjaminbrown038", "id": 35114142, "node_id": "MDQ6VXNlcjM1MTE0MTQy", "avatar_url": "https://avatars.githubusercontent.com/u/35114142?v=4", "gravatar_id": "", "url": "https://api.github.com/users/benjaminbrown038", "html_url": "https://github.com/benjaminbrown038", "followers_url": "https://api.github.com/users/benjaminbrown038/followers", "following_url": "https://api.github.com/users/benjaminbrown038/following{/other_user}", "gists_url": "https://api.github.com/users/benjaminbrown038/gists{/gist_id}", "starred_url": "https://api.github.com/users/benjaminbrown038/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/benjaminbrown038/subscriptions", "organizations_url": "https://api.github.com/users/benjaminbrown038/orgs", "repos_url": "https://api.github.com/users/benjaminbrown038/repos", "events_url": "https://api.github.com/users/benjaminbrown038/events{/privacy}", "received_events_url": "https://api.github.com/users/benjaminbrown038/received_events", "type": "User", "site_admin": false } ]
[ "`save_to_disk` should indeed expand `~`. Marking it as a \"good first issue\".", "#self-assign\r\n\r\nFile path to code: \r\n\r\nhttps://github.com/huggingface/datasets/blob/2.13.0/src/datasets/arrow_dataset.py#L1364", "Hello, \r\nIt says `save_to_disk` is deprecated in 2.8.0, so the alternative to this will be `storage_options`? \r\n\r\nhttps://huggingface.co/docs/datasets/package_reference/main_classes#datasets.Dataset.save_to_disk" ]
1,679,313,738,000
1,687,148,049,000
null
NONE
null
### Describe the bug

`save_to_disk()` does not expand `~`:

1. `dataset = load_dataset("any dataset")`
2. `dataset.save_to_disk("~/data")`
3. a folder named `~` is created in the current folder
4. `FileNotFoundError` is raised, because the expanded path (`/home/<user>/data`) does not exist

Related issue: https://github.com/huggingface/transformers/issues/10628

### Steps to reproduce the bug

As described above.

### Expected behavior

The path should be expanded with `expanduser` correctly.

### Environment info

- datasets 2.10.1
- python 3.10
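A minimal sketch of the kind of fix the thread points at, under the assumption that the expansion happens before any filesystem calls in `save_to_disk`; the helper name `_expanded` is illustrative, not the library's actual code:

```python
import os

def _expanded(path: str) -> str:
    # Expand "~" (and environment variables) before touching the
    # filesystem, so "~/data" resolves to "/home/<user>/data" instead of
    # creating a literal "~" directory in the current working directory.
    return os.path.expandvars(os.path.expanduser(path))

assert _expanded("~/data") == os.path.join(os.path.expanduser("~"), "data")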
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5651/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5651/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5650
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5650/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5650/comments
https://api.github.com/repos/huggingface/datasets/issues/5650/events
https://github.com/huggingface/datasets/issues/5650
1,630,336,919
I_kwDODunzps5hLPeX
5,650
load_dataset can't work correct with my image data
{ "login": "WiNE-iNEFF", "id": 41611046, "node_id": "MDQ6VXNlcjQxNjExMDQ2", "avatar_url": "https://avatars.githubusercontent.com/u/41611046?v=4", "gravatar_id": "", "url": "https://api.github.com/users/WiNE-iNEFF", "html_url": "https://github.com/WiNE-iNEFF", "followers_url": "https://api.github.com/users/WiNE-iNEFF/followers", "following_url": "https://api.github.com/users/WiNE-iNEFF/following{/other_user}", "gists_url": "https://api.github.com/users/WiNE-iNEFF/gists{/gist_id}", "starred_url": "https://api.github.com/users/WiNE-iNEFF/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/WiNE-iNEFF/subscriptions", "organizations_url": "https://api.github.com/users/WiNE-iNEFF/orgs", "repos_url": "https://api.github.com/users/WiNE-iNEFF/repos", "events_url": "https://api.github.com/users/WiNE-iNEFF/events{/privacy}", "received_events_url": "https://api.github.com/users/WiNE-iNEFF/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Can you post a reproducible code snippet of what you tried to do?\r\n\r\n", "> Can you post a reproducible code snippet of what you tried to do?\n> \n> \n\n```python\nfrom datasets import load_dataset\n\ndataset = load_dataset(\"my_folder_name\", split=\"train\")\n```", "hi @WiNE-iNEFF ! can you please also tell a bit more about how your data is structured (directory structure and filenames patterns)?", "> hi @WiNE-iNEFF ! can you please also tell a bit more about how your data is structured (directory structure and filenames patterns)?\n\nAll file have format .png converted in RGBA. \nIn main folder \"MyData\" contain 4 folder with images. In function load_dataset i use folder \"MyData\"", "@WiNE-iNEFF I'm sorry there is still not enough information to answer your question :( For now I can only assume that your [filenames contain split names](https://huggingface.co/docs/datasets/repository_structure#splits-and-file-names) which are somehow incorrectly parsed. \r\nWhat would be the output if you omit `split` while loading? Like just\r\n```python\r\nds = load_dataset(\"MyData\")\r\nprint(ds)\r\n```\r\n\r\n", "> @WiNE-iNEFF I'm sorry there is still not enough information to answer your question :( For now I can only assume that your [filenames contain split names](https://huggingface.co/docs/datasets/repository_structure#splits-and-file-names) which are somehow incorrectly parsed. \n> What would be the output if you omit `split` while loading? Like just\n> ```python\n> ds = load_dataset(\"MyData\")\n> print(ds)\n> ```\n> \n> \n\n```python\nDataset({\n features: ['image', 'label'],\n num_rows: 4\n})\n```", "@WiNE-iNEFF My only guess is that 4 images in your data have `\"train\"` string in their names (something like `\"train_image_0.png\"`) and others do not and the loader ignores all the files that do not contain split name in filename. If it's true, please try to remove \"train\" from filenames. Or maybe they are inside a directory named \"train\", then the directory should be renamed (unless you want to put only these 4 specific images to the train but apparently you do not).\r\n\r\nIf there is a bug I cannot investigate it unfortunately because I cannot reproduce your case without some data samples. ", "> @WiNE-iNEFF My only guess is that 4 images in your data have `\"train\"` string in their names (something like `\"train_image_0.png\"`) and others do not and the loader ignores all the files that do not contain split name in filename. If it's true, please try to remove \"train\" from filenames. Or maybe they are inside a directory named \"train\", then the directory should be renamed (unless you want to put only these 4 specific images to the train but apparently you do not).\n> \n> If there is a bug I cannot investigate it unfortunately because I cannot reproduce your case without some data samples. \n\nI checked my files and some of them do have the words train, valid and test in their names, but the number of such images is more than 500, not 4.", "@WiNE-iNEFF Probably they are named inconsistently so that the correct pattern for which files should correspond to which split cannot be inferred. 
You can make it clearer to the loader by removing split names from filenames and putting files in separate folder for each split (you can take a look at the [documentation for imagefolder](https://huggingface.co/docs/datasets/image_dataset#imagefolder)):\r\n```\r\n Fuaimeanna2/\r\nβ”œβ”€ test\r\nβ”‚Β Β  β”œβ”€ label_0\r\nβ”‚Β Β  β”‚Β Β  β”œβ”€β”€ filename_0.jpg\r\nβ”‚Β Β  β”‚Β Β  └── filename_1.jpg\r\nβ”‚Β Β  β”‚Β Β  └── ...\r\nβ”‚Β Β  β”œβ”€ label_1\r\nβ”‚Β Β  β”‚Β Β  └── ...\r\nβ”‚Β Β  β”œβ”€ label_2\r\nβ”‚Β Β  β”‚Β Β  └── ...\r\nβ”‚Β Β  └─ label_3\r\nβ”‚Β Β  └── ...\r\nβ”œβ”€ train\r\nβ”‚Β Β  β”œβ”€ label_0\r\nβ”‚Β Β  β”‚Β Β  └── ...\r\nβ”‚Β Β  β”œβ”€ label_1\r\nβ”‚Β Β  β”‚Β Β  └── ...\r\nβ”‚Β Β  β”œβ”€ label_2\r\nβ”‚Β Β  β”‚Β Β  └── ...\r\nβ”‚Β Β  └─ label_3\r\nβ”‚Β Β  └── ...\r\n└── validation\r\n Β Β  β”œβ”€ label_0\r\nΒ Β  β”‚Β Β  └── ...\r\n Β Β  β”œβ”€ label_1\r\nΒ Β  β”‚Β Β  └── ...\r\n Β Β  β”œβ”€ label_2\r\nΒ Β  β”‚Β Β  └── ...\r\n └─ label_3\r\n └── ...\r\n```", "> @WiNE-iNEFF Probably they are named inconsistently so that the correct pattern for which files should correspond to which split cannot be inferred. You can make it clearer to the loader by removing split names from filenames and putting files in separate folder for each split (you can take a look at the [documentation for imagefolder](https://huggingface.co/docs/datasets/image_dataset#imagefolder)):\n> ```\n> Fuaimeanna2/\n> β”œβ”€ test\n> β”‚Β Β  β”œβ”€ label_0\n> β”‚Β Β  β”‚Β Β  β”œβ”€β”€ filename_0.jpg\n> β”‚Β Β  β”‚Β Β  └── filename_1.jpg\n> β”‚Β Β  β”‚Β Β  └── ...\n> β”‚Β Β  β”œβ”€ label_1\n> β”‚Β Β  β”‚Β Β  └── ...\n> β”‚Β Β  β”œβ”€ label_2\n> β”‚Β Β  β”‚Β Β  └── ...\n> β”‚Β Β  └─ label_3\n> β”‚Β Β  └── ...\n> β”œβ”€ train\n> β”‚Β Β  β”œβ”€ label_0\n> β”‚Β Β  β”‚Β Β  └── ...\n> β”‚Β Β  β”œβ”€ label_1\n> β”‚Β Β  β”‚Β Β  └── ...\n> β”‚Β Β  β”œβ”€ label_2\n> β”‚Β Β  β”‚Β Β  └── ...\n> β”‚Β Β  └─ label_3\n> β”‚Β Β  └── ...\n> └── validation\n> Β Β  β”œβ”€ label_0\n> Β Β  β”‚Β Β  └── ...\n> Β Β  β”œβ”€ label_1\n> Β Β  β”‚Β Β  └── ...\n> Β Β  β”œβ”€ label_2\n> Β Β  β”‚Β Β  └── ...\n> └─ label_3\n> └── ...\n> ```\n\nI have read this documentation more than once. It just wasn't a problem before.", "Hi,\r\n\r\nYou need to use:\r\n```\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"imagefolder\", split=\"train\", data_dir=\"path_to_your_folder\")\r\n```\r\ninstead of \r\n```\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"my_folder_name\", split=\"train\")\r\n```\r\nTo create an image dataset from your local folders.", "> Hi,\r\n> \r\n> You need to use:\r\n> \r\n> ```\r\n> from datasets import load_dataset\r\n> \r\n> dataset = load_dataset(\"imagefolder\", split=\"train\", data_dir=\"path_to_your_folder\")\r\n> ```\r\n> \r\n> instead of\r\n> \r\n> ```\r\n> from datasets import load_dataset\r\n> \r\n> dataset = load_dataset(\"my_folder_name\", split=\"train\")\r\n> ```\r\n> \r\n> To create an image dataset from your local folders.\r\n\r\nThank you, but even using the method that you wrote above absolutely nothing changes, especially without using data_dir on my other data everything works fine", "@WiNE-iNEFF have you tried the suggestion I posted above? with removing split names from filenames and structuring files in folders? \r\n\r\n\r\n> even using the method that you wrote above absolutely nothing changes\r\n\r\nfyi - nothing changed because these two approaches are basically the same. 
it's just that when you pass your data directory as a dataset name (`load_dataset(\"my_folder_name\"`), not as `data_dir` (`load_dataset(\"imagefolder\", data_dir=\"my_folder_name\"`), `datasets` infers what module to use (`imagefolder` in your case) automatically, by file extensions.", "Oh I didn't know that! OK but in any case, not sure why the image builder isn't working for @WiNE-iNEFF. But it's hard for us to help if we can't reproduce. I'd just check the structure of the folders, see if the splits are correctly set up, etc.", "> @WiNE-iNEFF have you tried the suggestion I posted above? with removing split names from filenames and structuring files in folders? \n> \n> \n> > even using the method that you wrote above absolutely nothing changes\n> \n> fyi - nothing changed because these two approaches are basically the same. it's just that when you pass your data directory as a dataset name (`load_dataset(\"my_folder_name\"`), not as `data_dir` (`load_dataset(\"imagefolder\", data_dir=\"my_folder_name\"`), `datasets` infers what module to use (`imagefolder` in your case) automatically, by file extensions.\n\nI'll try to try your method over the next few days, then I'll write it turned out ", "> @WiNE-iNEFF have you tried the suggestion I posted above? with removing split names from filenames and structuring files in folders? \n> \n> \n> > even using the method that you wrote above absolutely nothing changes\n> \n> fyi - nothing changed because these two approaches are basically the same. it's just that when you pass your data directory as a dataset name (`load_dataset(\"my_folder_name\"`), not as `data_dir` (`load_dataset(\"imagefolder\", data_dir=\"my_folder_name\"`), `datasets` infers what module to use (`imagefolder` in your case) automatically, by file extensions.\n\nI tried creating a `train` folder and put my image folders in it. As a result, all 18,000 images were loaded. ", "@WiNE-iNEFF great! So to explain what happened according to my assumptions:\r\n\r\nWhen you use a standard packaged loader (like `imagefolder`, `csv`, `jsonl`, and so on) and load your data like `load_dataset(\"my_folder_name\")` or `load_dataset(\"imagefolder\", data_dir=\"my_folder_name\"`, the library searches for patterns to divide files into splits. This is described a bit in [this doc](https://huggingface.co/docs/datasets/v2.10.0/en/repository_structure#splits-and-file-names). And the order to search for patterns is the following:\r\n1. first it checks for [pattern like `data/<split_name>-xxxxx-of-xxxxx`](https://huggingface.co/docs/datasets/v2.10.0/en/repository_structure#custom-split-names) (which allows to pass custom split names)\r\n2. then for directories named as splits (if you have directories named `train`, `test` etc.)\r\n3. then for [splits in filenames](https://huggingface.co/docs/datasets/v2.10.0/en/repository_structure#splits-and-file-names) (like if you have files named `train-image.jpg`, `test_0.jpg`, ...)\r\n4. then if no pattern was found, it treats all files as belonging to a single `train` split\r\n\r\nThe code is [here](https://github.com/huggingface/datasets/blob/main/src/datasets/data_files.py#L215).\r\nSo I assume that in your case, since you didn't have directories for splits (pattern 2), some files that included split keywords (pattern 3) were included and others were ignored as not matching the pattern. And when you added `train` directory, the pattern for directories (pattern 2) was triggered first and everything worked as expected. 
Everything worked in your previous cases probably because you didn't have split names keywords in filenames, so all the files ended up being a part of a single train split (pattern 4).\r\n\r\nAnother way to mitigate this apart from structuring your data according to the patterns is to explicitly state with files belong to which splits by passing them with `data_files` parameter:\r\n```python\r\nload_dataset(\"my_folder_name\", data_files={\"train\": \"**\"}) # to tell that all files should be included \r\n```\r\n\r\nNow I see that this order should be explained in documentation and also referenced in sections for packaged modules like `imagefolder`, thank you for pointing this out. \r\n\r\n \r\n", "@NielsRogge @polinaeterna I have a similar problem when reading my dataset. I want to use DETR for object detection, but my data is in YOLO format. With a dataset of 10k images, yolo format involves having 10k labels. As far as I read regarding [COCO format](https://auto.gluon.ai/stable/tutorials/multimodal/object_detection/data_preparation/convert_data_to_coco_format.html), there must be one JSON per split. However, as I post in the [Hugging Face forum](https://discuss.huggingface.co/t/prepare-dataset-from-yolo-format-to-coco-for-detr/34894), when it is read, the number of rows is 1, which does not make sense. \r\nThe instruction to read the train-val-test splits are: \r\n```python\r\nfrom datasets import load_dataset\r\ndata_files = {\r\n\t\"train\": './train_labels.json',\r\n\t\"validation\": './val_labels.json',\r\n\t\"test\": './test_labels.json'\r\n}\r\ndataset = load_dataset(\"json\", data_files=data_files)\r\n```\r\nAn example of the short version of the json file I read, to reproduce my error, is the following: \r\n\r\n``` json\r\n{\r\n \"info\": {},\r\n \"licenses\": [],\r\n \"images\": [\r\n {\r\n \"id\": 1,\r\n \"file_name\": \"aceca_100.mp4frame21.png\",\r\n \"width\": 1280,\r\n \"height\": 720,\r\n \"pixel_values\": null,\r\n \"pixel_mask\": null\r\n },\r\n {\r\n \"id\": 2,\r\n \"file_name\": \"aceca_100.mp4frame24.png\",\r\n \"width\": 1280,\r\n \"height\": 720,\r\n \"pixel_values\": null,\r\n \"pixel_mask\": null\r\n },\r\n {\r\n \"id\": 3,\r\n \"file_name\": \"aceca_100.mp4frame25.png\",\r\n \"width\": 1280,\r\n \"height\": 720,\r\n \"pixel_values\": null,\r\n \"pixel_mask\": null}],\r\n \"annotations\": [\r\n {\r\n \"id\": 1,\r\n \"image_id\": 1,\r\n \"category_id\": 0,\r\n \"bbox\": [0.0, 278.21896388398557, 86.94096523844935, 156.0293445072134],\r\n \"area\": 13565.341816979679,\r\n \"iscrowd\": 0\r\n },\r\n {\r\n \"id\": 2,\r\n \"image_id\": 2,\r\n \"category_id\": 0,\r\n \"bbox\": [149.28851295721816, 297.6359759754418, 34.76802347007475, 98.03908698442889],\r\n \"area\": 3408.625277259324,\r\n \"iscrowd\": 0\r\n },\r\n {\r\n \"id\": 3,\r\n \"image_id\": 3,\r\n \"category_id\": 0,\r\n \"bbox\": [153.3817197549372, 300.168969412891, 31.787555842913775, 89.69583163436312],\r\n \"area\": 2851.2112569539095,\r\n \"iscrowd\": 0\r\n }\r\n ],\r\n \"categories\": [\r\n {\r\n \"id\": 0, \"name\": \"person\"\r\n }\r\n ]\r\n }\r\n```\r\nIf full files required, my email is [email protected]", "Hi @Alberto1404, to load an object detection dataset it's recommended to make use of the metadata feature as explained [here](https://huggingface.co/docs/datasets/image_dataset#object-detection). ", "Thank you @NielsRogge! It works!!!" ]
1,679,147,953,000
1,680,530,584,000
null
NONE
null
I have about 20000 images in my folder, divided into 4 folders with class names. When I use `load_dataset("my_folder_name", split="train")`, this function creates a dataset that contains only 4 images; the remaining 19000+ images were not added. I don't understand what the problem is. I tried converting the images and the like, but absolutely nothing worked.
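A minimal sketch of the two workarounds suggested further down in the thread, assuming the local folder from the report is called `MyData`:

```python
from datasets import load_dataset

# Option 1 (from the thread): force all files into a single train split,
# so split keywords ("train", "test", ...) hidden in filenames are ignored.
ds = load_dataset("MyData", data_files={"train": "**"})

# Option 2 (what resolved the thread): move the four class folders under
# an explicit "train/" directory, i.e. MyData/train/<label>/*.png, then:
# ds = load_dataset("imagefolder", data_dir="MyData", split="train")

print(ds)
```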
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5650/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5650/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5649
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5649/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5649/comments
https://api.github.com/repos/huggingface/datasets/issues/5649/events
https://github.com/huggingface/datasets/issues/5649
1,630,173,460
I_kwDODunzps5hKnkU
5,649
The index column created with .to_sql() is dependent on the batch_size when writing
{ "login": "lsb", "id": 45281, "node_id": "MDQ6VXNlcjQ1Mjgx", "avatar_url": "https://avatars.githubusercontent.com/u/45281?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lsb", "html_url": "https://github.com/lsb", "followers_url": "https://api.github.com/users/lsb/followers", "following_url": "https://api.github.com/users/lsb/following{/other_user}", "gists_url": "https://api.github.com/users/lsb/gists{/gist_id}", "starred_url": "https://api.github.com/users/lsb/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lsb/subscriptions", "organizations_url": "https://api.github.com/users/lsb/orgs", "repos_url": "https://api.github.com/users/lsb/repos", "events_url": "https://api.github.com/users/lsb/events{/privacy}", "received_events_url": "https://api.github.com/users/lsb/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
[ "Thanks for reporting, @lsb. \r\n\r\nWe are investigating it.\r\n\r\nOn the other hand, please note that in the next `datasets` release, the index will not be created by default (see #5583). If you would like to have it, you will need to explicitly pass `index=True`. ", "I think this is low enough priority for me to close this as Won't Fix. If I need any primary keys I can generate them beforehand. Feel free to reopen." ]
1,679,117,117,000
1,686,985,317,000
1,686,985,317,000
NONE
null
### Describe the bug

It seems like the "index" column is designed to be unique? The values are only unique per batch. The SQL index is not a unique index. This can be a problem, for instance, when building a faiss index on a dataset and then trying to match up ids with a sql export.

### Steps to reproduce the bug

```python
from datasets import Dataset
import sqlite3

db = sqlite3.connect(":memory:")
nice_numbers = Dataset.from_dict({"nice_number": range(101, 106)})
nice_numbers.to_sql("nice1", db, batch_size=1)
nice_numbers.to_sql("nice2", db, batch_size=2)
print(db.execute("select * from nice1").fetchall())
# [(0, 101), (0, 102), (0, 103), (0, 104), (0, 105)]
print(db.execute("select * from nice2").fetchall())
# [(0, 101), (1, 102), (0, 103), (1, 104), (0, 105)]
```

### Expected behavior

I expected the "index" column to be unique.

### Environment info

```
% datasets-cli env
Copy-and-paste the text below in your GitHub issue.
- `datasets` version: 2.10.1
- Platform: macOS-13.2.1-arm64-arm-64bit
- Python version: 3.9.6
- PyArrow version: 7.0.0
- Pandas version: 1.5.2
zsh: segmentation fault  datasets-cli env
```
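A minimal sketch of the workaround the reporter settles on (generating primary keys before export), assuming the same in-memory database; the column name `pk` is illustrative:

```python
import sqlite3
from datasets import Dataset

db = sqlite3.connect(":memory:")
nice_numbers = Dataset.from_dict({"nice_number": range(101, 106)})

# Add a globally unique id column instead of relying on the per-batch
# "index" column that to_sql() writes.
nice_numbers = nice_numbers.add_column("pk", list(range(len(nice_numbers))))

# Extra keyword arguments are forwarded to pandas.DataFrame.to_sql, so
# index=False should suppress the non-unique "index" column entirely.
nice_numbers.to_sql("nice", db, batch_size=2, index=False)
print(db.execute("select * from nice").fetchall())
# [(101, 0), (102, 1), (103, 2), (104, 3), (105, 4)]
```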
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5649/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5649/timeline
null
not_planned
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5648
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5648/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5648/comments
https://api.github.com/repos/huggingface/datasets/issues/5648/events
https://github.com/huggingface/datasets/issues/5648
1,629,253,719
I_kwDODunzps5hHHBX
5,648
flatten_indices doesn't work with pandas format
{ "login": "alialamiidrissi", "id": 14365168, "node_id": "MDQ6VXNlcjE0MzY1MTY4", "avatar_url": "https://avatars.githubusercontent.com/u/14365168?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alialamiidrissi", "html_url": "https://github.com/alialamiidrissi", "followers_url": "https://api.github.com/users/alialamiidrissi/followers", "following_url": "https://api.github.com/users/alialamiidrissi/following{/other_user}", "gists_url": "https://api.github.com/users/alialamiidrissi/gists{/gist_id}", "starred_url": "https://api.github.com/users/alialamiidrissi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alialamiidrissi/subscriptions", "organizations_url": "https://api.github.com/users/alialamiidrissi/orgs", "repos_url": "https://api.github.com/users/alialamiidrissi/repos", "events_url": "https://api.github.com/users/alialamiidrissi/events{/privacy}", "received_events_url": "https://api.github.com/users/alialamiidrissi/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false } ]
[ "Thanks for reporting! This can be fixed by setting the format to `arrow` in `flatten_indices` and restoring the original format after the flattening. I'm working on a PR that reduces the number of the `flatten_indices` calls in our codebase and makes `flatten_indices` a no-op when a dataset does not have an indices mapping, so I'll incorporate the fix in that PR." ]
1,679,057,065,000
1,679,404,323,000
null
NONE
null
### Describe the bug

Hi, I noticed that `flatten_indices` throws an error when the batch format is `pandas`. This is probably due to the fact that `flatten_indices` uses `map` internally, which doesn't accept dataframes as the transformation function output.

### Steps to reproduce the bug

```python
import numpy as np
import pandas as pd
import datasets

tabular_data = pd.DataFrame(np.random.randn(10, 10))
tabular_data = datasets.arrow_dataset.Dataset.from_pandas(tabular_data)
tabular_data.with_format("pandas").select([0, 1, 2, 3]).flatten_indices()
```

### Expected behavior

No error thrown.

### Environment info

- `datasets` version: 2.10.1
- Python version: 3.9.5
- PyArrow version: 11.0.0
- Pandas version: 1.4.1
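A minimal sketch of a possible workaround until a fix lands, assuming the error is triggered only by the pandas batch format: reset the format, flatten, then set it back.

```python
import numpy as np
import pandas as pd
import datasets

ds = datasets.Dataset.from_pandas(pd.DataFrame(np.random.randn(10, 10)))
ds = ds.with_format("pandas").select([0, 1, 2, 3])

# flatten_indices() maps over the dataset internally and cannot handle
# pandas-formatted batches, so temporarily drop the format.
ds = ds.with_format(None).flatten_indices().with_format("pandas")
print(len(ds))  # 4
```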
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5648/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5648/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5647
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5647/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5647/comments
https://api.github.com/repos/huggingface/datasets/issues/5647/events
https://github.com/huggingface/datasets/issues/5647
1,628,225,544
I_kwDODunzps5hDMAI
5,647
Make all print statements optional
{ "login": "gagan3012", "id": 49101362, "node_id": "MDQ6VXNlcjQ5MTAxMzYy", "avatar_url": "https://avatars.githubusercontent.com/u/49101362?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gagan3012", "html_url": "https://github.com/gagan3012", "followers_url": "https://api.github.com/users/gagan3012/followers", "following_url": "https://api.github.com/users/gagan3012/following{/other_user}", "gists_url": "https://api.github.com/users/gagan3012/gists{/gist_id}", "starred_url": "https://api.github.com/users/gagan3012/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gagan3012/subscriptions", "organizations_url": "https://api.github.com/users/gagan3012/orgs", "repos_url": "https://api.github.com/users/gagan3012/repos", "events_url": "https://api.github.com/users/gagan3012/events{/privacy}", "received_events_url": "https://api.github.com/users/gagan3012/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
[ "related to #5444 " ]
1,678,998,607,000
1,679,589,425,000
null
NONE
null
### Feature request

Make all print statements optional to speed up development.

### Motivation

I'm loading multiple tiny datasets and all the print statements make the loading slower.

### Your contribution

I can help contribute.
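A minimal sketch of what can already be silenced today with the library's logging helpers (the related issue #5444 discusses the same pain point); this targets log messages and progress bars rather than literal `print` calls, and the dataset name is just an example:

```python
from datasets import load_dataset
from datasets.utils.logging import disable_progress_bar, set_verbosity_error

set_verbosity_error()   # drop info/warning log messages while loading
disable_progress_bar()  # hide the tqdm progress bars as well

ds = load_dataset("rotten_tomatoes", split="train")
```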
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5647/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5647/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5646
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5646/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5646/comments
https://api.github.com/repos/huggingface/datasets/issues/5646/events
https://github.com/huggingface/datasets/pull/5646
1,627,838,762
PR_kwDODunzps5MOqjj
5,646
Allow self as key in `Features`
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009980 / 0.011353 (-0.001373) | 0.006643 / 0.011008 (-0.004366) | 0.140722 / 0.038508 (0.102214) | 0.036693 / 0.023109 (0.013584) | 0.430019 / 0.275898 (0.154121) | 0.463218 / 0.323480 (0.139738) | 0.006977 / 0.007986 (-0.001008) | 0.006488 / 0.004328 (0.002160) | 0.099385 / 0.004250 (0.095134) | 0.047160 / 0.037052 (0.010108) | 0.431440 / 0.258489 (0.172951) | 0.500232 / 0.293841 (0.206391) | 0.057968 / 0.128546 (-0.070578) | 0.020197 / 0.075646 (-0.055449) | 0.438269 / 0.419271 (0.018998) | 0.071149 / 0.043533 (0.027617) | 0.428502 / 0.255139 (0.173363) | 0.486861 / 0.283200 (0.203661) | 0.119855 / 0.141683 (-0.021828) | 1.875372 / 1.452155 (0.423218) | 1.955055 / 1.492716 (0.462339) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.243468 / 0.018006 (0.225462) | 0.547842 / 0.000490 (0.547352) | 0.004885 / 0.000200 (0.004685) | 0.000144 / 0.000054 (0.000089) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031555 / 0.037411 (-0.005856) | 0.125869 / 0.014526 (0.111343) | 0.137816 / 0.176557 (-0.038741) | 0.206581 / 0.737135 (-0.530555) | 0.142976 / 0.296338 (-0.153362) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.624773 / 0.215209 (0.409564) | 6.154861 / 2.077655 (4.077206) | 2.504586 
/ 1.504120 (1.000466) | 1.989118 / 1.541195 (0.447923) | 2.092280 / 1.468490 (0.623790) | 1.240108 / 4.584777 (-3.344669) | 5.584893 / 3.745712 (1.839181) | 3.075369 / 5.269862 (-2.194492) | 2.174285 / 4.565676 (-2.391391) | 0.141555 / 0.424275 (-0.282720) | 0.016099 / 0.007607 (0.008492) | 0.720543 / 0.226044 (0.494498) | 7.489000 / 2.268929 (5.220071) | 3.239189 / 55.444624 (-52.205435) | 2.525772 / 6.876477 (-4.350704) | 2.773514 / 2.142072 (0.631441) | 1.410084 / 4.805227 (-3.395143) | 0.259252 / 6.500664 (-6.241412) | 0.082573 / 0.075469 (0.007104) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.458186 / 1.841788 (-0.383602) | 17.503738 / 8.074308 (9.429430) | 20.817682 / 10.191392 (10.626290) | 0.231221 / 0.680424 (-0.449203) | 0.032550 / 0.534201 (-0.501651) | 0.559020 / 0.579283 (-0.020263) | 0.592987 / 0.434364 (0.158623) | 0.602661 / 0.540337 (0.062324) | 0.731912 / 1.386936 (-0.655024) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009543 / 0.011353 (-0.001810) | 0.006953 / 0.011008 (-0.004055) | 0.087651 / 0.038508 (0.049143) | 0.031717 / 0.023109 (0.008608) | 0.437813 / 0.275898 (0.161915) | 0.468448 / 0.323480 (0.144968) | 0.007378 / 0.007986 (-0.000607) | 0.005170 / 0.004328 (0.000842) | 0.102286 / 0.004250 (0.098035) | 0.043643 / 0.037052 (0.006591) | 0.458788 / 0.258489 (0.200299) | 0.519891 / 0.293841 (0.226050) | 0.052875 / 0.128546 (-0.075671) | 0.020518 / 0.075646 (-0.055128) | 0.112675 / 0.419271 (-0.306597) | 0.066390 / 0.043533 (0.022858) | 0.423037 / 0.255139 (0.167898) | 0.420345 / 0.283200 (0.137146) | 0.119221 / 0.141683 (-0.022462) | 1.632244 / 1.452155 (0.180090) | 1.829585 / 1.492716 (0.336869) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.242312 / 0.018006 (0.224305) | 0.547592 / 0.000490 (0.547102) | 0.006520 / 0.000200 (0.006320) | 0.000185 / 0.000054 (0.000131) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032204 / 0.037411 (-0.005207) | 0.113320 / 0.014526 (0.098794) | 0.135667 / 0.176557 (-0.040889) | 0.194360 / 0.737135 (-0.542775) | 0.127934 / 0.296338 (-0.168404) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.648134 / 0.215209 (0.432925) | 6.470574 / 2.077655 (4.392920) | 2.799121 / 1.504120 (1.295001) | 2.160450 / 1.541195 (0.619255) | 2.261648 / 1.468490 (0.793158) | 1.244660 / 4.584777 (-3.340117) | 5.694636 / 3.745712 (1.948923) | 5.316191 / 5.269862 (0.046329) | 2.764551 / 4.565676 (-1.801126) | 0.152225 / 0.424275 (-0.272051) | 0.015959 / 0.007607 (0.008351) | 0.833606 / 0.226044 (0.607562) | 8.099765 / 2.268929 (5.830836) | 3.523005 / 55.444624 (-51.921620) | 2.855126 / 6.876477 (-4.021351) | 2.730849 / 2.142072 (0.588776) | 1.434351 / 4.805227 (-3.370876) | 0.251963 / 6.500664 (-6.248701) | 0.085718 / 0.075469 (0.010249) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.722466 / 1.841788 (-0.119322) | 17.846981 / 8.074308 (9.772673) | 21.578684 / 10.191392 (11.387292) | 0.239987 / 0.680424 (-0.440437) | 0.029189 / 0.534201 (-0.505012) | 0.543181 / 0.579283 (-0.036102) | 0.626527 / 0.434364 (0.192163) | 0.614334 / 0.540337 (0.073997) | 0.745934 / 1.386936 (-0.641002) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#4c506ad7cd22668f37ec51ff01b7c7f7235b9212 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007395 / 0.011353 (-0.003958) | 0.004965 / 0.011008 (-0.006043) | 0.096376 / 0.038508 (0.057868) | 0.033243 / 0.023109 (0.010134) | 0.299990 / 0.275898 (0.024092) | 0.336287 / 0.323480 (0.012807) | 0.005528 / 0.007986 (-0.002458) | 0.004003 / 0.004328 (-0.000326) | 0.072820 / 0.004250 (0.068569) | 0.042867 / 0.037052 (0.005815) | 0.296719 / 0.258489 (0.038230) | 0.337313 / 0.293841 (0.043472) | 0.036809 / 0.128546 (-0.091738) | 0.012239 / 0.075646 (-0.063407) | 0.332351 / 0.419271 (-0.086921) | 0.050449 / 0.043533 (0.006916) | 0.301483 / 0.255139 (0.046344) | 0.316673 / 0.283200 (0.033474) | 0.102526 / 0.141683 (-0.039157) | 1.415429 / 1.452155 (-0.036726) | 1.544381 / 1.492716 (0.051665) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.211158 / 0.018006 (0.193152) | 0.434718 / 0.000490 (0.434228) | 0.003386 / 0.000200 (0.003186) | 0.000078 / 0.000054 (0.000024) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027945 / 0.037411 (-0.009466) | 0.108743 / 0.014526 (0.094217) | 0.119771 / 0.176557 (-0.056785) | 0.178667 / 0.737135 (-0.558468) | 0.123718 / 0.296338 (-0.172620) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.413908 / 0.215209 (0.198699) | 4.136828 / 2.077655 (2.059174) | 1.932547 / 1.504120 (0.428427) | 1.715389 / 1.541195 (0.174194) | 1.791679 / 1.468490 (0.323189) | 0.692715 / 4.584777 (-3.892062) | 3.741807 / 3.745712 (-0.003905) | 2.066274 / 5.269862 (-3.203587) | 1.314106 / 4.565676 (-3.251570) | 0.087191 / 0.424275 (-0.337084) | 0.012866 / 0.007607 (0.005259) | 0.510012 / 0.226044 (0.283968) | 5.116419 / 2.268929 (2.847490) | 2.408562 / 55.444624 (-53.036063) | 2.002044 / 6.876477 (-4.874433) | 2.121868 / 2.142072 (-0.020204) | 0.837141 / 4.805227 (-3.968086) | 0.166596 / 6.500664 (-6.334068) | 0.063190 / 0.075469 (-0.012279) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.204152 / 1.841788 (-0.637636) | 14.739793 / 8.074308 (6.665485) | 14.403469 / 10.191392 (4.212077) | 0.165781 / 0.680424 (-0.514642) | 0.017826 / 0.534201 (-0.516375) | 0.423527 / 0.579283 (-0.155756) | 0.431410 / 0.434364 (-0.002954) | 0.499422 / 0.540337 
(-0.040915) | 0.596116 / 1.386936 (-0.790820) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007365 / 0.011353 (-0.003988) | 0.005165 / 0.011008 (-0.005844) | 0.073403 / 0.038508 (0.034895) | 0.032542 / 0.023109 (0.009433) | 0.339304 / 0.275898 (0.063406) | 0.371892 / 0.323480 (0.048412) | 0.005544 / 0.007986 (-0.002442) | 0.004108 / 0.004328 (-0.000221) | 0.073750 / 0.004250 (0.069500) | 0.045613 / 0.037052 (0.008561) | 0.366159 / 0.258489 (0.107670) | 0.389864 / 0.293841 (0.096023) | 0.036006 / 0.128546 (-0.092540) | 0.012402 / 0.075646 (-0.063244) | 0.085137 / 0.419271 (-0.334135) | 0.048485 / 0.043533 (0.004952) | 0.334172 / 0.255139 (0.079033) | 0.353168 / 0.283200 (0.069969) | 0.099393 / 0.141683 (-0.042290) | 1.460584 / 1.452155 (0.008429) | 1.518601 / 1.492716 (0.025885) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.227352 / 0.018006 (0.209346) | 0.444211 / 0.000490 (0.443721) | 0.000410 / 0.000200 (0.000210) | 0.000060 / 0.000054 (0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029517 / 0.037411 (-0.007894) | 0.115557 / 0.014526 (0.101031) | 0.125855 / 0.176557 (-0.050701) | 0.175214 / 0.737135 (-0.561922) | 0.129324 / 0.296338 (-0.167014) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.429783 / 0.215209 (0.214574) | 4.301159 / 2.077655 (2.223504) | 2.084939 / 1.504120 (0.580819) | 1.887781 / 1.541195 (0.346586) | 2.045712 
/ 1.468490 (0.577222) | 0.693319 / 4.584777 (-3.891458) | 3.788595 / 3.745712 (0.042883) | 2.087080 / 5.269862 (-3.182781) | 1.325247 / 4.565676 (-3.240429) | 0.085919 / 0.424275 (-0.338356) | 0.012710 / 0.007607 (0.005103) | 0.533432 / 0.226044 (0.307387) | 5.339468 / 2.268929 (3.070540) | 2.578351 / 55.444624 (-52.866273) | 2.224905 / 6.876477 (-4.651572) | 2.301064 / 2.142072 (0.158992) | 0.839622 / 4.805227 (-3.965605) | 0.166523 / 6.500664 (-6.334141) | 0.065254 / 0.075469 (-0.010215) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.262223 / 1.841788 (-0.579565) | 15.042523 / 8.074308 (6.968215) | 14.542719 / 10.191392 (4.351327) | 0.142230 / 0.680424 (-0.538194) | 0.017610 / 0.534201 (-0.516591) | 0.422357 / 0.579283 (-0.156926) | 0.417785 / 0.434364 (-0.016579) | 0.491990 / 0.540337 (-0.048348) | 0.585835 / 1.386936 (-0.801101) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#c2fcedd2a561fe6f5b6972ad18bfef722e1d2c77 \"CML watermark\")\n" ]
1,678,983,423,000
1,678,987,318,000
1,678,986,890,000
CONTRIBUTOR
null
Fix #5641
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5646/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5646/timeline
null
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5646", "html_url": "https://github.com/huggingface/datasets/pull/5646", "diff_url": "https://github.com/huggingface/datasets/pull/5646.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5646.patch", "merged_at": "2023-03-16T17:14:50" }
true
https://api.github.com/repos/huggingface/datasets/issues/5645
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5645/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5645/comments
https://api.github.com/repos/huggingface/datasets/issues/5645/events
https://github.com/huggingface/datasets/issues/5645
1,627,108,278
I_kwDODunzps5g-7O2
5,645
Datasets map and select(range()) give a dill error
{ "login": "Tanya-11", "id": 90728105, "node_id": "MDQ6VXNlcjkwNzI4MTA1", "avatar_url": "https://avatars.githubusercontent.com/u/90728105?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Tanya-11", "html_url": "https://github.com/Tanya-11", "followers_url": "https://api.github.com/users/Tanya-11/followers", "following_url": "https://api.github.com/users/Tanya-11/following{/other_user}", "gists_url": "https://api.github.com/users/Tanya-11/gists{/gist_id}", "starred_url": "https://api.github.com/users/Tanya-11/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Tanya-11/subscriptions", "organizations_url": "https://api.github.com/users/Tanya-11/orgs", "repos_url": "https://api.github.com/users/Tanya-11/repos", "events_url": "https://api.github.com/users/Tanya-11/events{/privacy}", "received_events_url": "https://api.github.com/users/Tanya-11/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "It looks like an error that we observed once in https://github.com/huggingface/datasets/pull/5166\r\n\r\nCan you try to update `datasets` ?\r\n\r\n```\r\npip install -U datasets\r\n```\r\n\r\nif it doesn't work, can you make sure you don't have packages installed that may modify `dill`'s behavior, such as `apache-beam` ?", "@lhoestq That fixed the problem, Thanks :)" ]
1,678,960,888,000
1,679,027,091,000
1,679,027,091,000
NONE
null
### Describe the bug

I'm using the Hugging Face Datasets library to load a dataset in Google Colab. When I do

> data = train_dataset.select(range(10))

or

> train_datasets = train_dataset.map(
>     process_data_to_model_inputs,
>     batched=True,
>     batch_size=batch_size,
>     remove_columns=["article", "abstract"],
> )

I get the following error: `module 'dill._dill' has no attribute 'log'`

I've tried downgrading the dill version from latest to 0.2.8, but no luck.

Stack trace:

> ---------------------------------------------------------------------------
> ModuleNotFoundError Traceback (most recent call last)
> /usr/local/lib/python3.9/dist-packages/datasets/utils/py_utils.py in _no_cache_fields(obj)
> 367 try:
> --> 368 import transformers as tr
> 369
>
> ModuleNotFoundError: No module named 'transformers'
>
> During handling of the above exception, another exception occurred:
>
> AttributeError Traceback (most recent call last)
> 17 frames
> <ipython-input-13-dd14813880a6> in <module>
> ----> 1 test = train_dataset.select(range(10))
>
> /usr/local/lib/python3.9/dist-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
> 155 }
> 156 # apply actual function
> --> 157 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
> 158 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
> 159 # re-apply format to the output
>
> /usr/local/lib/python3.9/dist-packages/datasets/fingerprint.py in wrapper(*args, **kwargs)
> 155 if kwargs.get(fingerprint_name) is None:
> 156 kwargs_for_fingerprint["fingerprint_name"] = fingerprint_name
> --> 157 kwargs[fingerprint_name] = update_fingerprint(
> 158 self._fingerprint, transform, kwargs_for_fingerprint
> 159 )
>
> /usr/local/lib/python3.9/dist-packages/datasets/fingerprint.py in update_fingerprint(fingerprint, transform, transform_args)
> 103 for key in sorted(transform_args):
> 104 hasher.update(key)
> --> 105 hasher.update(transform_args[key])
> 106 return hasher.hexdigest()
> 107
>
> /usr/local/lib/python3.9/dist-packages/datasets/fingerprint.py in update(self, value)
> 55 def update(self, value):
> 56 self.m.update(f"=={type(value)}==".encode("utf8"))
> ---> 57 self.m.update(self.hash(value).encode("utf-8"))
> 58
> 59 def hexdigest(self):
>
> /usr/local/lib/python3.9/dist-packages/datasets/fingerprint.py in hash(cls, value)
> 51 return cls.dispatch[type(value)](cls, value)
> 52 else:
> ---> 53 return cls.hash_default(value)
> 54
> 55 def update(self, value):
>
> /usr/local/lib/python3.9/dist-packages/datasets/fingerprint.py in hash_default(cls, value)
> 44 @classmethod
> 45 def hash_default(cls, value):
> ---> 46 return cls.hash_bytes(dumps(value))
> 47
> 48 @classmethod
>
> /usr/local/lib/python3.9/dist-packages/datasets/utils/py_utils.py in dumps(obj)
> 387 file = StringIO()
> 388 with _no_cache_fields(obj):
> --> 389 dump(obj, file)
> 390 return file.getvalue()
> 391
>
> /usr/local/lib/python3.9/dist-packages/datasets/utils/py_utils.py in dump(obj, file)
> 359 def dump(obj, file):
> 360 """pickle an object to a file"""
> --> 361 Pickler(file, recurse=True).dump(obj)
> 362 return
> 363
>
> /usr/local/lib/python3.9/dist-packages/dill/_dill.py in dump(self, obj)
> 392 return
> 393
> --> 394 def load_session(filename='/tmp/session.pkl', main=None):
> 395 """update the __main__ module with the state from the session file"""
> 396 if main is None: main = _main_module
>
> /usr/lib/python3.9/pickle.py in dump(self, obj)
> 485 if self.proto >= 4:
> 486 self.framer.start_framing()
> --> 487 self.save(obj)
> 488 self.write(STOP)
> 489 self.framer.end_framing()
>
> /usr/local/lib/python3.9/dist-packages/dill/_dill.py in save(self, obj, save_persistent_id)
> 386 pickler._byref = False # disable pickling by name reference
> 387 pickler._recurse = False # disable pickling recursion for globals
> --> 388 pickler._session = True # is best indicator of when pickling a session
> 389 pickler.dump(main)
> 390 finally:
>
> /usr/lib/python3.9/pickle.py in save(self, obj, save_persistent_id)
> 558 f = self.dispatch.get(t)
> 559 if f is not None:
> --> 560 f(self, obj) # Call unbound method with explicit self
> 561 return
> 562
>
> /usr/local/lib/python3.9/dist-packages/dill/_dill.py in save_singleton(pickler, obj)
>
> /usr/lib/python3.9/pickle.py in save_reduce(self, func, args, state, listitems, dictitems, state_setter, obj)
> 689 write(NEWOBJ)
> 690 else:
> --> 691 save(func)
> 692 save(args)
> 693 write(REDUCE)
>
> /usr/local/lib/python3.9/dist-packages/dill/_dill.py in save(self, obj, save_persistent_id)
> 386 pickler._byref = False # disable pickling by name reference
> 387 pickler._recurse = False # disable pickling recursion for globals
> --> 388 pickler._session = True # is best indicator of when pickling a session
> 389 pickler.dump(main)
> 390 finally:
>
> /usr/lib/python3.9/pickle.py in save(self, obj, save_persistent_id)
> 558 f = self.dispatch.get(t)
> 559 if f is not None:
> --> 560 f(self, obj) # Call unbound method with explicit self
> 561 return
> 562
>
> /usr/local/lib/python3.9/dist-packages/datasets/utils/py_utils.py in save_function(pickler, obj)
> 583 dill._dill.log.info("# F1")
> 584 else:
> --> 585 dill._dill.log.info("F2: %s" % obj)
> 586 name = getattr(obj, "__qualname__", getattr(obj, "__name__", None))
> 587 dill._dill.StockPickler.save_global(pickler, obj, name=name)
>
> AttributeError: module 'dill._dill' has no attribute 'log'

### Steps to reproduce the bug

After loading the dataset (e.g. https://huggingface.co/datasets/scientific_papers) in Google Colab, do either

> data = train_dataset.select(range(10))

or

> train_datasets = train_dataset.map(
>     process_data_to_model_inputs,
>     batched=True,
>     batch_size=batch_size,
>     remove_columns=["article", "abstract"],
> )

### Expected behavior

The map and select functions should work.

### Environment info

dataset: https://huggingface.co/datasets/scientific_papers
dill = 0.3.6
python = 3.9.16
transformers = 4.2.0
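For context, a minimal sketch of the calls from this report together with the fix suggested in the comments (updating `datasets`); the `"arxiv"` config, the preprocessing function body, and the batch size are illustrative assumptions, not taken from the report:

```python
# pip install -U datasets   # the resolution suggested (and confirmed) in the comments
from datasets import load_dataset

# "scientific_papers" is the dataset linked in the report; the "arxiv" config is an assumption
train_dataset = load_dataset("scientific_papers", "arxiv", split="train")

# Both calls below raised `module 'dill._dill' has no attribute 'log'` on the outdated install
data = train_dataset.select(range(10))

def process_data_to_model_inputs(batch):
    # placeholder for the reporter's preprocessing; any picklable function works here
    return batch

train_datasets = train_dataset.map(
    process_data_to_model_inputs,
    batched=True,
    batch_size=16,  # illustrative value; the report does not pin one
    remove_columns=["article", "abstract"],
)
```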
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5645/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5645/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5644
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5644/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5644/comments
https://api.github.com/repos/huggingface/datasets/issues/5644/events
https://github.com/huggingface/datasets/pull/5644
1,626,204,046
PR_kwDODunzps5MJHUi
5,644
Allow direct cast from binary to Audio/Image
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008337 / 0.011353 (-0.003016) | 0.005588 / 0.011008 (-0.005421) | 0.110259 / 0.038508 (0.071751) | 0.038928 / 0.023109 (0.015819) | 0.350441 / 0.275898 (0.074543) | 0.378473 / 0.323480 (0.054993) | 0.006369 / 0.007986 (-0.001616) | 0.005730 / 0.004328 (0.001401) | 0.083042 / 0.004250 (0.078792) | 0.048686 / 0.037052 (0.011634) | 0.367561 / 0.258489 (0.109072) | 0.398073 / 0.293841 (0.104232) | 0.043247 / 0.128546 (-0.085299) | 0.013862 / 0.075646 (-0.061785) | 0.386745 / 0.419271 (-0.032527) | 0.060107 / 0.043533 (0.016574) | 0.345450 / 0.255139 (0.090311) | 0.371269 / 0.283200 (0.088069) | 0.117508 / 0.141683 (-0.024175) | 1.689345 / 1.452155 (0.237191) | 1.777119 / 1.492716 (0.284402) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.248248 / 0.018006 (0.230242) | 0.505200 / 0.000490 (0.504710) | 0.015354 / 0.000200 (0.015155) | 0.000794 / 0.000054 (0.000740) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030179 / 0.037411 (-0.007232) | 0.118583 / 0.014526 (0.104057) | 0.131546 / 0.176557 (-0.045010) | 0.196173 / 0.737135 (-0.540962) | 0.140532 / 0.296338 (-0.155807) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.470733 / 0.215209 (0.255524) | 4.758868 / 2.077655 (2.681213) | 2.246731 
/ 1.504120 (0.742611) | 1.995232 / 1.541195 (0.454037) | 2.057596 / 1.468490 (0.589106) | 0.819227 / 4.584777 (-3.765550) | 4.472093 / 3.745712 (0.726381) | 2.428154 / 5.269862 (-2.841708) | 1.748023 / 4.565676 (-2.817654) | 0.101965 / 0.424275 (-0.322310) | 0.014706 / 0.007607 (0.007098) | 0.600593 / 0.226044 (0.374548) | 5.869565 / 2.268929 (3.600637) | 2.764890 / 55.444624 (-52.679735) | 2.332112 / 6.876477 (-4.544364) | 2.486190 / 2.142072 (0.344118) | 0.979123 / 4.805227 (-3.826104) | 0.199543 / 6.500664 (-6.301121) | 0.075906 / 0.075469 (0.000436) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.397694 / 1.841788 (-0.444094) | 16.910500 / 8.074308 (8.836192) | 16.174131 / 10.191392 (5.982739) | 0.173975 / 0.680424 (-0.506449) | 0.021403 / 0.534201 (-0.512798) | 0.496187 / 0.579283 (-0.083096) | 0.487369 / 0.434364 (0.053005) | 0.565924 / 0.540337 (0.025587) | 0.684965 / 1.386936 (-0.701971) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008253 / 0.011353 (-0.003100) | 0.005745 / 0.011008 (-0.005263) | 0.085848 / 0.038508 (0.047340) | 0.038753 / 0.023109 (0.015644) | 0.401278 / 0.275898 (0.125379) | 0.433132 / 0.323480 (0.109652) | 0.006112 / 0.007986 (-0.001874) | 0.005973 / 0.004328 (0.001644) | 0.085339 / 0.004250 (0.081088) | 0.053297 / 0.037052 (0.016244) | 0.400265 / 0.258489 (0.141776) | 0.455155 / 0.293841 (0.161314) | 0.043116 / 0.128546 (-0.085430) | 0.013957 / 0.075646 (-0.061689) | 0.099507 / 0.419271 (-0.319764) | 0.058858 / 0.043533 (0.015325) | 0.398030 / 0.255139 (0.142891) | 0.418171 / 0.283200 (0.134971) | 0.114392 / 0.141683 (-0.027291) | 1.683102 / 1.452155 (0.230947) | 1.801427 / 1.492716 (0.308711) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.242271 / 0.018006 (0.224265) | 0.494920 / 0.000490 (0.494430) | 0.007328 / 0.000200 (0.007128) | 0.000144 / 0.000054 (0.000090) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034061 / 0.037411 (-0.003351) | 0.146417 / 0.014526 (0.131891) | 0.161079 / 0.176557 (-0.015477) | 0.213999 / 0.737135 (-0.523137) | 0.166704 / 0.296338 (-0.129634) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.491214 / 0.215209 (0.276005) | 4.846946 / 2.077655 (2.769291) | 2.352595 / 1.504120 (0.848475) | 2.114055 / 1.541195 (0.572860) | 2.213537 / 1.468490 (0.745047) | 0.799625 / 4.584777 (-3.785152) | 4.440519 / 3.745712 (0.694807) | 4.476103 / 5.269862 (-0.793758) | 2.249384 / 4.565676 (-2.316292) | 0.098807 / 0.424275 (-0.325468) | 0.014463 / 0.007607 (0.006856) | 0.611793 / 0.226044 (0.385748) | 6.045710 / 2.268929 (3.776782) | 2.865957 / 55.444624 (-52.578667) | 2.454052 / 6.876477 (-4.422425) | 2.606153 / 2.142072 (0.464080) | 0.969057 / 4.805227 (-3.836170) | 0.198499 / 6.500664 (-6.302166) | 0.077012 / 0.075469 (0.001543) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.497020 / 1.841788 (-0.344767) | 17.834277 / 8.074308 (9.759969) | 16.413792 / 10.191392 (6.222400) | 0.201979 / 0.680424 (-0.478445) | 0.020627 / 0.534201 (-0.513574) | 0.499767 / 0.579283 (-0.079516) | 0.496982 / 0.434364 (0.062618) | 0.579554 / 0.540337 (0.039216) | 0.693287 / 1.386936 (-0.693649) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#a1a3fee942ae159ff6cfe6a23b343605e7e12f55 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007461 / 0.011353 (-0.003892) | 0.005341 / 0.011008 (-0.005668) | 0.099252 / 0.038508 (0.060744) | 0.034723 / 0.023109 (0.011614) | 0.300980 / 0.275898 (0.025082) | 0.353860 / 0.323480 (0.030380) | 0.006100 / 0.007986 (-0.001885) | 0.004149 / 0.004328 (-0.000180) | 0.074765 / 0.004250 (0.070514) | 0.052226 / 0.037052 (0.015174) | 0.305098 / 0.258489 (0.046609) | 0.357445 / 0.293841 (0.063604) | 0.036129 / 0.128546 (-0.092417) | 0.012482 / 0.075646 (-0.063165) | 0.333321 / 0.419271 (-0.085951) | 0.050489 / 0.043533 (0.006956) | 0.294728 / 0.255139 (0.039589) | 0.322722 / 0.283200 (0.039523) | 0.101226 / 0.141683 (-0.040456) | 1.436787 / 1.452155 (-0.015367) | 1.515784 / 1.492716 (0.023068) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.291836 / 0.018006 (0.273830) | 0.550735 / 0.000490 (0.550245) | 0.003828 / 0.000200 (0.003628) | 0.000113 / 0.000054 (0.000058) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028490 / 0.037411 (-0.008922) | 0.109543 / 0.014526 (0.095017) | 0.119451 / 0.176557 (-0.057105) | 0.176721 / 0.737135 (-0.560415) | 0.126711 / 0.296338 (-0.169628) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.418863 / 0.215209 (0.203654) | 4.179167 / 2.077655 (2.101512) | 1.965126 / 1.504120 (0.461006) | 1.775544 / 1.541195 (0.234349) | 1.882667 / 1.468490 (0.414177) | 0.709201 / 4.584777 (-3.875576) | 3.754780 / 3.745712 (0.009068) | 2.175324 / 5.269862 (-3.094538) | 1.477454 / 4.565676 (-3.088223) | 0.085527 / 0.424275 (-0.338748) | 0.012685 / 0.007607 (0.005078) | 0.514276 / 0.226044 (0.288231) | 5.140518 / 2.268929 (2.871589) | 2.436011 / 55.444624 (-53.008614) | 2.114355 / 6.876477 (-4.762122) | 2.278893 / 2.142072 (0.136821) | 0.847825 / 4.805227 (-3.957402) | 0.169579 / 6.500664 (-6.331086) | 0.065306 / 0.075469 (-0.010163) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.190376 / 1.841788 (-0.651411) | 14.756581 / 8.074308 (6.682272) | 14.622610 / 10.191392 (4.431218) | 0.168186 / 0.680424 (-0.512238) | 0.017527 / 0.534201 (-0.516674) | 0.427808 / 0.579283 (-0.151475) | 0.437278 / 0.434364 (0.002914) | 0.509242 / 0.540337 
(-0.031095) | 0.602500 / 1.386936 (-0.784436) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007331 / 0.011353 (-0.004022) | 0.005703 / 0.011008 (-0.005305) | 0.074992 / 0.038508 (0.036484) | 0.034069 / 0.023109 (0.010960) | 0.343513 / 0.275898 (0.067615) | 0.369061 / 0.323480 (0.045582) | 0.006034 / 0.007986 (-0.001951) | 0.004344 / 0.004328 (0.000016) | 0.074678 / 0.004250 (0.070428) | 0.052262 / 0.037052 (0.015210) | 0.364758 / 0.258489 (0.106269) | 0.401130 / 0.293841 (0.107289) | 0.037635 / 0.128546 (-0.090912) | 0.012599 / 0.075646 (-0.063047) | 0.086935 / 0.419271 (-0.332337) | 0.058161 / 0.043533 (0.014628) | 0.338727 / 0.255139 (0.083589) | 0.355957 / 0.283200 (0.072757) | 0.111607 / 0.141683 (-0.030076) | 1.454357 / 1.452155 (0.002202) | 1.591529 / 1.492716 (0.098813) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.284379 / 0.018006 (0.266373) | 0.550720 / 0.000490 (0.550230) | 0.002868 / 0.000200 (0.002668) | 0.000102 / 0.000054 (0.000048) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028876 / 0.037411 (-0.008535) | 0.110892 / 0.014526 (0.096366) | 0.122519 / 0.176557 (-0.054038) | 0.169774 / 0.737135 (-0.567361) | 0.129381 / 0.296338 (-0.166957) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.429181 / 0.215209 (0.213972) | 4.251016 / 2.077655 (2.173361) | 2.056778 / 1.504120 (0.552658) | 1.860458 / 1.541195 (0.319264) | 1.958923 / 
1.468490 (0.490432) | 0.712667 / 4.584777 (-3.872110) | 3.856910 / 3.745712 (0.111198) | 3.374535 / 5.269862 (-1.895327) | 1.846744 / 4.565676 (-2.718932) | 0.087238 / 0.424275 (-0.337037) | 0.012718 / 0.007607 (0.005111) | 0.524654 / 0.226044 (0.298609) | 5.209756 / 2.268929 (2.940827) | 2.494882 / 55.444624 (-52.949743) | 2.201150 / 6.876477 (-4.675327) | 2.274189 / 2.142072 (0.132117) | 0.844728 / 4.805227 (-3.960499) | 0.167467 / 6.500664 (-6.333197) | 0.064018 / 0.075469 (-0.011451) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.273284 / 1.841788 (-0.568503) | 15.104413 / 8.074308 (7.030105) | 15.134025 / 10.191392 (4.942633) | 0.147568 / 0.680424 (-0.532856) | 0.017429 / 0.534201 (-0.516772) | 0.422052 / 0.579283 (-0.157231) | 0.425786 / 0.434364 (-0.008578) | 0.491753 / 0.540337 (-0.048584) | 0.585091 / 1.386936 (-0.801845) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f3d26e74898e0a9dc0d78490104e2e173269ef5b \"CML watermark\")\n" ]
1,678,910,574,000
1,678,976,444,000
1,678,975,975,000
CONTRIBUTOR
null
To address https://github.com/huggingface/datasets/discussions/5593.
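As a rough sketch of what the change enables, assuming a column of raw encoded image bytes (the file path is illustrative):

```python
from datasets import Dataset, Image

# hypothetical raw bytes of an encoded image file
with open("cat.png", "rb") as f:
    png_bytes = f.read()

ds = Dataset.from_dict({"image": [png_bytes]})  # stored as a plain binary column
ds = ds.cast_column("image", Image())           # direct binary -> Image cast from this PR
print(ds[0]["image"])                           # decoded as a PIL image on access
```

The same direct cast should apply to `Audio()` with encoded audio bytes.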
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5644/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5644/timeline
null
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5644", "html_url": "https://github.com/huggingface/datasets/pull/5644", "diff_url": "https://github.com/huggingface/datasets/pull/5644.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5644.patch", "merged_at": "2023-03-16T14:12:55" }
true
https://api.github.com/repos/huggingface/datasets/issues/5643
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5643/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5643/comments
https://api.github.com/repos/huggingface/datasets/issues/5643/events
https://github.com/huggingface/datasets/pull/5643
1,626,160,220
PR_kwDODunzps5MI9zO
5,643
Support PyArrow arrays as column values in `from_dict`
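A minimal sketch of the feature named in the title (the column name and values are illustrative):

```python
import pyarrow as pa
from datasets import Dataset

# with this change, a pa.Array can be passed directly as a column value
ds = Dataset.from_dict({"ids": pa.array([1, 2, 3])})
print(ds.features)  # expected: an int64 Value feature inferred from the array
print(ds["ids"])    # [1, 2, 3]
```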
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006665 / 0.011353 (-0.004688) | 0.004842 / 0.011008 (-0.006166) | 0.097802 / 0.038508 (0.059294) | 0.032292 / 0.023109 (0.009182) | 0.327522 / 0.275898 (0.051624) | 0.351851 / 0.323480 (0.028371) | 0.005197 / 0.007986 (-0.002789) | 0.003781 / 0.004328 (-0.000547) | 0.073213 / 0.004250 (0.068963) | 0.045819 / 0.037052 (0.008767) | 0.331323 / 0.258489 (0.072834) | 0.376978 / 0.293841 (0.083137) | 0.035014 / 0.128546 (-0.093532) | 0.011853 / 0.075646 (-0.063793) | 0.344031 / 0.419271 (-0.075240) | 0.049094 / 0.043533 (0.005561) | 0.327054 / 0.255139 (0.071915) | 0.349053 / 0.283200 (0.065853) | 0.095413 / 0.141683 (-0.046269) | 1.451593 / 1.452155 (-0.000562) | 1.505568 / 1.492716 (0.012851) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.211624 / 0.018006 (0.193618) | 0.437569 / 0.000490 (0.437079) | 0.003775 / 0.000200 (0.003575) | 0.000082 / 0.000054 (0.000027) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025915 / 0.037411 (-0.011496) | 0.104085 / 0.014526 (0.089559) | 0.111064 / 0.176557 (-0.065493) | 0.167316 / 0.737135 (-0.569819) | 0.117255 / 0.296338 (-0.179084) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.424241 / 0.215209 (0.209032) | 4.251365 / 2.077655 (2.173710) | 
2.074036 / 1.504120 (0.569916) | 1.858022 / 1.541195 (0.316828) | 1.819929 / 1.468490 (0.351439) | 0.704153 / 4.584777 (-3.880624) | 3.750506 / 3.745712 (0.004794) | 3.149836 / 5.269862 (-2.120026) | 1.729540 / 4.565676 (-2.836137) | 0.087287 / 0.424275 (-0.336988) | 0.012304 / 0.007607 (0.004697) | 0.513811 / 0.226044 (0.287767) | 5.129427 / 2.268929 (2.860498) | 2.489253 / 55.444624 (-52.955371) | 2.122746 / 6.876477 (-4.753730) | 2.208528 / 2.142072 (0.066456) | 0.843386 / 4.805227 (-3.961841) | 0.169320 / 6.500664 (-6.331344) | 0.064085 / 0.075469 (-0.011384) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.184361 / 1.841788 (-0.657427) | 14.013478 / 8.074308 (5.939170) | 13.936774 / 10.191392 (3.745382) | 0.138009 / 0.680424 (-0.542415) | 0.017192 / 0.534201 (-0.517009) | 0.420938 / 0.579283 (-0.158345) | 0.413390 / 0.434364 (-0.020974) | 0.500244 / 0.540337 (-0.040094) | 0.582499 / 1.386936 (-0.804437) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006709 / 0.011353 (-0.004643) | 0.004847 / 0.011008 (-0.006161) | 0.074740 / 0.038508 (0.036232) | 0.032126 / 0.023109 (0.009017) | 0.343248 / 0.275898 (0.067350) | 0.376822 / 0.323480 (0.053342) | 0.005547 / 0.007986 (-0.002439) | 0.005080 / 0.004328 (0.000752) | 0.074634 / 0.004250 (0.070384) | 0.044735 / 0.037052 (0.007682) | 0.357895 / 0.258489 (0.099406) | 0.401150 / 0.293841 (0.107310) | 0.035485 / 0.128546 (-0.093061) | 0.011978 / 0.075646 (-0.063668) | 0.087567 / 0.419271 (-0.331704) | 0.050233 / 0.043533 (0.006701) | 0.337476 / 0.255139 (0.082337) | 0.385064 / 0.283200 (0.101865) | 0.102733 / 0.141683 (-0.038950) | 1.456238 / 1.452155 (0.004083) | 1.539468 / 1.492716 (0.046752) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.203156 / 0.018006 (0.185149) | 0.448898 / 0.000490 (0.448408) | 0.002843 / 0.000200 (0.002644) | 0.000222 / 0.000054 (0.000168) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027836 / 0.037411 (-0.009576) | 0.109889 / 0.014526 (0.095364) | 0.119378 / 0.176557 (-0.057179) | 0.171208 / 0.737135 (-0.565927) | 0.124240 / 0.296338 (-0.172098) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.425374 / 0.215209 (0.210165) | 4.252994 / 2.077655 (2.175339) | 2.006410 / 1.504120 (0.502290) | 1.812821 / 1.541195 (0.271626) | 1.857618 / 1.468490 (0.389128) | 0.714564 / 4.584777 (-3.870213) | 3.803040 / 3.745712 (0.057328) | 2.075452 / 5.269862 (-3.194410) | 1.344868 / 4.565676 (-3.220809) | 0.088705 / 0.424275 (-0.335570) | 0.012481 / 0.007607 (0.004874) | 0.528022 / 0.226044 (0.301977) | 5.268878 / 2.268929 (2.999949) | 2.467858 / 55.444624 (-52.976767) | 2.138681 / 6.876477 (-4.737796) | 2.134928 / 2.142072 (-0.007145) | 0.851518 / 4.805227 (-3.953709) | 0.175085 / 6.500664 (-6.325579) | 0.063555 / 0.075469 (-0.011914) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.265788 / 1.841788 (-0.576000) | 14.683444 / 8.074308 (6.609136) | 14.055848 / 10.191392 (3.864456) | 0.145260 / 0.680424 (-0.535164) | 0.017064 / 0.534201 (-0.517137) | 0.424836 / 0.579283 (-0.154447) | 0.418345 / 0.434364 (-0.016019) | 0.491408 / 0.540337 (-0.048930) | 0.594387 / 1.386936 (-0.792549) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#10c3f32c228cc7011ce456498942e6a2a5dc3086 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006870 / 0.011353 (-0.004483) | 0.004602 / 0.011008 (-0.006406) | 0.100075 / 0.038508 (0.061567) | 0.028720 / 0.023109 (0.005611) | 0.304212 / 0.275898 (0.028314) | 0.348423 / 0.323480 (0.024943) | 0.005266 / 0.007986 (-0.002720) | 0.003473 / 0.004328 (-0.000855) | 0.077563 / 0.004250 (0.073313) | 0.040066 / 0.037052 (0.003013) | 0.304039 / 0.258489 (0.045550) | 0.348721 / 0.293841 (0.054881) | 0.032127 / 0.128546 (-0.096419) | 0.011583 / 0.075646 (-0.064063) | 0.326853 / 0.419271 (-0.092418) | 0.043158 / 0.043533 (-0.000375) | 0.310111 / 0.255139 (0.054973) | 0.332869 / 0.283200 (0.049670) | 0.088384 / 0.141683 (-0.053299) | 1.509245 / 1.452155 (0.057091) | 1.575393 / 1.492716 (0.082677) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.212839 / 0.018006 (0.194833) | 0.431407 / 0.000490 (0.430918) | 0.002639 / 0.000200 (0.002439) | 0.000076 / 0.000054 (0.000021) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024945 / 0.037411 (-0.012466) | 0.101312 / 0.014526 (0.086787) | 0.107873 / 0.176557 (-0.068683) | 0.169579 / 0.737135 (-0.567556) | 0.109922 / 0.296338 (-0.186417) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.422091 / 0.215209 (0.206882) | 4.227174 / 2.077655 (2.149519) | 1.957964 / 1.504120 (0.453844) | 1.812076 / 1.541195 (0.270882) | 1.966666 / 1.468490 (0.498176) | 0.698710 / 4.584777 (-3.886067) | 3.431824 / 3.745712 (-0.313888) | 1.898646 / 5.269862 (-3.371215) | 1.172096 / 4.565676 (-3.393581) | 0.083383 / 0.424275 (-0.340892) | 0.012793 / 0.007607 (0.005186) | 0.522501 / 0.226044 (0.296457) | 5.240049 / 2.268929 (2.971121) | 2.349286 / 55.444624 (-53.095338) | 2.051117 / 6.876477 (-4.825360) | 2.255652 / 2.142072 (0.113580) | 0.813668 / 4.805227 (-3.991560) | 0.153770 / 6.500664 (-6.346894) | 0.068323 / 0.075469 (-0.007146) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.197204 / 1.841788 (-0.644584) | 14.146212 / 8.074308 (6.071904) | 14.469765 / 10.191392 (4.278373) | 0.130024 / 0.680424 (-0.550400) | 0.016858 / 0.534201 (-0.517343) | 0.382949 / 0.579283 (-0.196334) | 0.393414 / 0.434364 (-0.040950) | 0.447910 / 0.540337 
(-0.092427) | 0.529842 / 1.386936 (-0.857094) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006903 / 0.011353 (-0.004450) | 0.004695 / 0.011008 (-0.006313) | 0.077457 / 0.038508 (0.038949) | 0.028624 / 0.023109 (0.005514) | 0.340767 / 0.275898 (0.064869) | 0.378811 / 0.323480 (0.055331) | 0.005996 / 0.007986 (-0.001990) | 0.003481 / 0.004328 (-0.000848) | 0.076284 / 0.004250 (0.072034) | 0.042564 / 0.037052 (0.005511) | 0.340908 / 0.258489 (0.082419) | 0.384952 / 0.293841 (0.091111) | 0.032057 / 0.128546 (-0.096489) | 0.011697 / 0.075646 (-0.063949) | 0.085941 / 0.419271 (-0.333331) | 0.042464 / 0.043533 (-0.001069) | 0.339309 / 0.255139 (0.084170) | 0.368105 / 0.283200 (0.084905) | 0.093382 / 0.141683 (-0.048301) | 1.467220 / 1.452155 (0.015065) | 1.563105 / 1.492716 (0.070389) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.260631 / 0.018006 (0.242625) | 0.418155 / 0.000490 (0.417665) | 0.009539 / 0.000200 (0.009339) | 0.000103 / 0.000054 (0.000048) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025494 / 0.037411 (-0.011917) | 0.106034 / 0.014526 (0.091508) | 0.109878 / 0.176557 (-0.066678) | 0.160754 / 0.737135 (-0.576382) | 0.113226 / 0.296338 (-0.183112) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.442989 / 0.215209 (0.227780) | 4.447040 / 2.077655 (2.369385) | 2.082529 / 1.504120 (0.578409) | 1.876952 / 1.541195 (0.335757) | 1.968341 
/ 1.468490 (0.499851) | 0.704317 / 4.584777 (-3.880460) | 3.466190 / 3.745712 (-0.279523) | 1.924954 / 5.269862 (-3.344908) | 1.199763 / 4.565676 (-3.365913) | 0.084320 / 0.424275 (-0.339955) | 0.012956 / 0.007607 (0.005349) | 0.538905 / 0.226044 (0.312861) | 5.426593 / 2.268929 (3.157665) | 2.509287 / 55.444624 (-52.935338) | 2.174829 / 6.876477 (-4.701648) | 2.239214 / 2.142072 (0.097141) | 0.810031 / 4.805227 (-3.995196) | 0.153534 / 6.500664 (-6.347130) | 0.069578 / 0.075469 (-0.005891) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.294068 / 1.841788 (-0.547720) | 14.601899 / 8.074308 (6.527591) | 14.469282 / 10.191392 (4.277890) | 0.130024 / 0.680424 (-0.550400) | 0.016895 / 0.534201 (-0.517306) | 0.382583 / 0.579283 (-0.196700) | 0.388938 / 0.434364 (-0.045426) | 0.448416 / 0.540337 (-0.091922) | 0.533261 / 1.386936 (-0.853675) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#7b2af47647152d39a3acade256da898cb396e4d9 \"CML watermark\")\n" ]
1,678,908,760,000
1,678,987,386,000
1,678,986,940,000
CONTRIBUTOR
null
For consistency with `pa.Table.from_pydict`, which supports both Python lists and PyArrow arrays as column values. "Fixes" https://discuss.huggingface.co/t/pyarrow-lib-floatarray-did-not-recognize-python-value-type-when-inferring-an-arrow-data-type/33417
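A minimal sketch of the behavior this PR targets (the column names and values below are illustrative, not taken from the PR's tests):

```python
import pyarrow as pa
from datasets import Dataset

# pa.Table.from_pydict already accepts plain Python lists and PyArrow arrays alike:
table = pa.Table.from_pydict({"a": [1.0, 2.0], "b": pa.array([3.0, 4.0])})

# With this change, Dataset.from_dict should accept the same mix of values:
ds = Dataset.from_dict({"a": [1.0, 2.0], "b": pa.array([3.0, 4.0])})
print(ds.features)
```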
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5643/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5643/timeline
null
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5643", "html_url": "https://github.com/huggingface/datasets/pull/5643", "diff_url": "https://github.com/huggingface/datasets/pull/5643.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5643.patch", "merged_at": "2023-03-16T17:15:39" }
true
https://api.github.com/repos/huggingface/datasets/issues/5642
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5642/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5642/comments
https://api.github.com/repos/huggingface/datasets/issues/5642/events
https://github.com/huggingface/datasets/pull/5642
1,626,043,177
PR_kwDODunzps5MIjw9
5,642
Bump hfh to 0.11.0
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006334 / 0.011353 (-0.005018) | 0.004447 / 0.011008 (-0.006561) | 0.099287 / 0.038508 (0.060779) | 0.027426 / 0.023109 (0.004317) | 0.322638 / 0.275898 (0.046740) | 0.370501 / 0.323480 (0.047021) | 0.004775 / 0.007986 (-0.003210) | 0.003289 / 0.004328 (-0.001040) | 0.076531 / 0.004250 (0.072280) | 0.037485 / 0.037052 (0.000432) | 0.335634 / 0.258489 (0.077145) | 0.384031 / 0.293841 (0.090190) | 0.031258 / 0.128546 (-0.097288) | 0.011619 / 0.075646 (-0.064027) | 0.326309 / 0.419271 (-0.092963) | 0.042513 / 0.043533 (-0.001020) | 0.340817 / 0.255139 (0.085678) | 0.369846 / 0.283200 (0.086646) | 0.084904 / 0.141683 (-0.056779) | 1.481739 / 1.452155 (0.029584) | 1.566593 / 1.492716 (0.073877) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.186424 / 0.018006 (0.168418) | 0.400879 / 0.000490 (0.400389) | 0.003520 / 0.000200 (0.003320) | 0.000079 / 0.000054 (0.000024) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023287 / 0.037411 (-0.014124) | 0.097767 / 0.014526 (0.083241) | 0.103271 / 0.176557 (-0.073286) | 0.165414 / 0.737135 (-0.571722) | 0.106437 / 0.296338 (-0.189901) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.422711 / 0.215209 (0.207502) | 4.221382 / 2.077655 (2.143727) | 
1.906807 / 1.504120 (0.402687) | 1.709595 / 1.541195 (0.168400) | 1.720452 / 1.468490 (0.251962) | 0.699477 / 4.584777 (-3.885300) | 3.415840 / 3.745712 (-0.329873) | 2.835669 / 5.269862 (-2.434192) | 1.501775 / 4.565676 (-3.063901) | 0.082896 / 0.424275 (-0.341379) | 0.012855 / 0.007607 (0.005248) | 0.514373 / 0.226044 (0.288329) | 5.190000 / 2.268929 (2.921071) | 2.302539 / 55.444624 (-53.142086) | 1.963410 / 6.876477 (-4.913067) | 2.020944 / 2.142072 (-0.121128) | 0.805919 / 4.805227 (-3.999308) | 0.150604 / 6.500664 (-6.350060) | 0.065977 / 0.075469 (-0.009492) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.206487 / 1.841788 (-0.635300) | 13.631513 / 8.074308 (5.557205) | 13.800258 / 10.191392 (3.608866) | 0.146914 / 0.680424 (-0.533509) | 0.016454 / 0.534201 (-0.517747) | 0.377752 / 0.579283 (-0.201532) | 0.384312 / 0.434364 (-0.050052) | 0.434912 / 0.540337 (-0.105425) | 0.522507 / 1.386936 (-0.864429) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006328 / 0.011353 (-0.005025) | 0.004406 / 0.011008 (-0.006602) | 0.077951 / 0.038508 (0.039443) | 0.026716 / 0.023109 (0.003607) | 0.337303 / 0.275898 (0.061405) | 0.372036 / 0.323480 (0.048556) | 0.004800 / 0.007986 (-0.003185) | 0.003153 / 0.004328 (-0.001175) | 0.076823 / 0.004250 (0.072573) | 0.035873 / 0.037052 (-0.001179) | 0.340243 / 0.258489 (0.081754) | 0.380183 / 0.293841 (0.086342) | 0.032185 / 0.128546 (-0.096361) | 0.011545 / 0.075646 (-0.064101) | 0.086887 / 0.419271 (-0.332384) | 0.041560 / 0.043533 (-0.001973) | 0.338716 / 0.255139 (0.083577) | 0.363080 / 0.283200 (0.079881) | 0.088375 / 0.141683 (-0.053308) | 1.499004 / 1.452155 (0.046850) | 1.585904 / 1.492716 (0.093188) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.211645 / 0.018006 (0.193639) | 0.403707 / 0.000490 (0.403218) | 0.000415 / 0.000200 (0.000215) | 0.000058 / 0.000054 (0.000004) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024972 / 0.037411 (-0.012440) | 0.097996 / 0.014526 (0.083470) | 0.105941 / 0.176557 (-0.070616) | 0.155521 / 0.737135 (-0.581615) | 0.108246 / 0.296338 (-0.188092) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.442316 / 0.215209 (0.227107) | 4.417977 / 2.077655 (2.340322) | 2.078324 / 1.504120 (0.574205) | 1.863678 / 1.541195 (0.322483) | 1.917149 / 1.468490 (0.448659) | 0.697628 / 4.584777 (-3.887149) | 3.412810 / 3.745712 (-0.332902) | 1.866473 / 5.269862 (-3.403389) | 1.155923 / 4.565676 (-3.409754) | 0.082831 / 0.424275 (-0.341444) | 0.012367 / 0.007607 (0.004760) | 0.540018 / 0.226044 (0.313974) | 5.420472 / 2.268929 (3.151544) | 2.508540 / 55.444624 (-52.936084) | 2.166397 / 6.876477 (-4.710080) | 2.153486 / 2.142072 (0.011414) | 0.804860 / 4.805227 (-4.000367) | 0.151178 / 6.500664 (-6.349486) | 0.067870 / 0.075469 (-0.007599) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.310387 / 1.841788 (-0.531400) | 13.908916 / 8.074308 (5.834608) | 14.136895 / 10.191392 (3.945503) | 0.139389 / 0.680424 (-0.541035) | 0.016687 / 0.534201 (-0.517514) | 0.379624 / 0.579283 (-0.199659) | 0.382634 / 0.434364 (-0.051730) | 0.439632 / 0.540337 (-0.100706) | 0.524913 / 1.386936 (-0.862023) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f8f2143b4ed39b58ed415029e7838d767662da91 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006365 / 0.011353 (-0.004988) | 0.004457 / 0.011008 (-0.006551) | 0.097989 / 0.038508 (0.059481) | 0.027686 / 0.023109 (0.004577) | 0.357412 / 0.275898 (0.081514) | 0.368573 / 0.323480 (0.045093) | 0.004859 / 0.007986 (-0.003127) | 0.003262 / 0.004328 (-0.001066) | 0.076487 / 0.004250 (0.072237) | 0.035526 / 0.037052 (-0.001527) | 0.332862 / 0.258489 (0.074373) | 0.369334 / 0.293841 (0.075493) | 0.030750 / 0.128546 (-0.097796) | 0.011503 / 0.075646 (-0.064143) | 0.323289 / 0.419271 (-0.095982) | 0.042302 / 0.043533 (-0.001231) | 0.334009 / 0.255139 (0.078870) | 0.354150 / 0.283200 (0.070951) | 0.082895 / 0.141683 (-0.058788) | 1.499727 / 1.452155 (0.047572) | 1.574123 / 1.492716 (0.081407) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.192583 / 0.018006 (0.174577) | 0.408136 / 0.000490 (0.407646) | 0.001272 / 0.000200 (0.001072) | 0.000070 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022883 / 0.037411 (-0.014528) | 0.095710 / 0.014526 (0.081185) | 0.106545 / 0.176557 (-0.070011) | 0.165784 / 0.737135 (-0.571352) | 0.108594 / 0.296338 (-0.187744) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.429483 / 0.215209 (0.214274) | 4.292338 / 2.077655 (2.214683) | 1.917759 / 1.504120 (0.413639) | 1.711489 / 1.541195 (0.170294) | 1.735668 / 1.468490 (0.267178) | 0.707602 / 4.584777 (-3.877175) | 3.369643 / 3.745712 (-0.376070) | 1.874517 / 5.269862 (-3.395344) | 1.248560 / 4.565676 (-3.317117) | 0.083247 / 0.424275 (-0.341028) | 0.012606 / 0.007607 (0.004999) | 0.519342 / 0.226044 (0.293297) | 5.225462 / 2.268929 (2.956533) | 2.433230 / 55.444624 (-53.011394) | 2.006005 / 6.876477 (-4.870471) | 2.093156 / 2.142072 (-0.048916) | 0.809372 / 4.805227 (-3.995855) | 0.151691 / 6.500664 (-6.348973) | 0.066680 / 0.075469 (-0.008789) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.226283 / 1.841788 (-0.615505) | 13.604338 / 8.074308 (5.530030) | 13.953245 / 10.191392 (3.761853) | 0.132904 / 0.680424 (-0.547520) | 0.016420 / 0.534201 (-0.517781) | 0.395316 / 0.579283 (-0.183967) | 0.385003 / 0.434364 (-0.049361) | 0.483303 / 0.540337 
(-0.057034) | 0.578459 / 1.386936 (-0.808477) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006218 / 0.011353 (-0.005135) | 0.004451 / 0.011008 (-0.006557) | 0.076892 / 0.038508 (0.038384) | 0.027017 / 0.023109 (0.003908) | 0.356976 / 0.275898 (0.081078) | 0.396083 / 0.323480 (0.072603) | 0.005510 / 0.007986 (-0.002476) | 0.003265 / 0.004328 (-0.001063) | 0.075771 / 0.004250 (0.071521) | 0.037117 / 0.037052 (0.000064) | 0.362181 / 0.258489 (0.103692) | 0.401771 / 0.293841 (0.107931) | 0.032062 / 0.128546 (-0.096484) | 0.011453 / 0.075646 (-0.064194) | 0.085773 / 0.419271 (-0.333498) | 0.041679 / 0.043533 (-0.001854) | 0.355120 / 0.255139 (0.099981) | 0.390170 / 0.283200 (0.106970) | 0.088210 / 0.141683 (-0.053473) | 1.526434 / 1.452155 (0.074279) | 1.586019 / 1.492716 (0.093302) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.196836 / 0.018006 (0.178830) | 0.401161 / 0.000490 (0.400671) | 0.002880 / 0.000200 (0.002680) | 0.000080 / 0.000054 (0.000025) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024445 / 0.037411 (-0.012966) | 0.100187 / 0.014526 (0.085661) | 0.106391 / 0.176557 (-0.070165) | 0.159764 / 0.737135 (-0.577372) | 0.109828 / 0.296338 (-0.186511) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.444228 / 0.215209 (0.229018) | 4.420769 / 2.077655 (2.343114) | 2.069437 / 1.504120 (0.565318) | 1.862587 / 1.541195 (0.321392) | 1.934627 
/ 1.468490 (0.466137) | 0.699681 / 4.584777 (-3.885095) | 3.352540 / 3.745712 (-0.393172) | 2.613172 / 5.269862 (-2.656689) | 1.445116 / 4.565676 (-3.120561) | 0.083086 / 0.424275 (-0.341189) | 0.012715 / 0.007607 (0.005108) | 0.537450 / 0.226044 (0.311405) | 5.403052 / 2.268929 (3.134123) | 2.506703 / 55.444624 (-52.937921) | 2.170198 / 6.876477 (-4.706279) | 2.201909 / 2.142072 (0.059837) | 0.799555 / 4.805227 (-4.005672) | 0.150825 / 6.500664 (-6.349839) | 0.067234 / 0.075469 (-0.008235) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.293097 / 1.841788 (-0.548691) | 13.817133 / 8.074308 (5.742825) | 14.247231 / 10.191392 (4.055839) | 0.128422 / 0.680424 (-0.552002) | 0.016541 / 0.534201 (-0.517660) | 0.382466 / 0.579283 (-0.196817) | 0.380560 / 0.434364 (-0.053804) | 0.439061 / 0.540337 (-0.101276) | 0.521865 / 1.386936 (-0.865071) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#69e60be438c334919f590512fd664436bd6b3667 \"CML watermark\")\n", "I also took the liberty of removing `_hf_hub_fixes.py` completely :)\r\n\r\n> Do you think this is really necessary and convenient? I would naively say that 5% of the users is not a negligible number...\r\n\r\nI think it's ok. Most of them are using old versions of `datasets` anyway.\r\n\r\n", "merging, but lmk if you have other concerns", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006810 / 0.011353 (-0.004543) | 0.004683 / 0.011008 (-0.006325) | 0.100889 / 0.038508 (0.062381) | 0.030135 / 0.023109 (0.007026) | 0.356407 / 0.275898 (0.080509) | 0.389175 / 0.323480 (0.065695) | 0.005358 / 0.007986 (-0.002627) | 0.004760 / 0.004328 (0.000432) | 0.075904 / 0.004250 (0.071654) | 0.040341 / 0.037052 (0.003288) | 0.357363 / 0.258489 (0.098874) | 0.394185 / 0.293841 (0.100344) | 0.031322 / 0.128546 (-0.097224) | 0.011636 / 0.075646 (-0.064010) | 0.327327 / 0.419271 (-0.091944) | 0.042494 / 0.043533 (-0.001039) | 0.338079 / 0.255139 (0.082940) | 0.363388 / 0.283200 (0.080189) | 0.087102 / 0.141683 (-0.054581) | 1.505686 / 
1.452155 (0.053531) | 1.562112 / 1.492716 (0.069396) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.203630 / 0.018006 (0.185624) | 0.425986 / 0.000490 (0.425496) | 0.003786 / 0.000200 (0.003586) | 0.000071 / 0.000054 (0.000017) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024138 / 0.037411 (-0.013274) | 0.101752 / 0.014526 (0.087226) | 0.105436 / 0.176557 (-0.071121) | 0.165385 / 0.737135 (-0.571750) | 0.114510 / 0.296338 (-0.181828) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.447561 / 0.215209 (0.232352) | 4.449212 / 2.077655 (2.371557) | 2.169472 / 1.504120 (0.665352) | 1.989025 / 1.541195 (0.447831) | 2.036267 / 1.468490 (0.567776) | 0.698647 / 4.584777 (-3.886130) | 3.483281 / 3.745712 (-0.262431) | 1.949306 / 5.269862 (-3.320555) | 1.290313 / 4.565676 (-3.275363) | 0.083079 / 0.424275 (-0.341196) | 0.012759 / 0.007607 (0.005152) | 0.540944 / 0.226044 (0.314899) | 5.473391 / 2.268929 (3.204463) | 2.632037 / 55.444624 (-52.812587) | 2.327396 / 6.876477 (-4.549081) | 2.428880 / 2.142072 (0.286808) | 0.808918 / 4.805227 (-3.996309) | 0.153283 / 6.500664 (-6.347381) | 0.068325 / 0.075469 (-0.007145) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.212527 / 1.841788 (-0.629260) | 14.306444 / 8.074308 (6.232136) | 14.904980 / 10.191392 (4.713588) | 0.142796 / 0.680424 (-0.537628) | 0.016829 / 0.534201 (-0.517372) | 0.384806 / 0.579283 (-0.194477) | 0.390505 / 0.434364 (-0.043859) | 0.441734 / 0.540337 (-0.098603) | 0.526159 / 1.386936 (-0.860777) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy 
after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006950 / 0.011353 (-0.004403) | 0.004647 / 0.011008 (-0.006362) | 0.078925 / 0.038508 (0.040417) | 0.028081 / 0.023109 (0.004971) | 0.343420 / 0.275898 (0.067522) | 0.380567 / 0.323480 (0.057087) | 0.005286 / 0.007986 (-0.002700) | 0.004816 / 0.004328 (0.000487) | 0.077332 / 0.004250 (0.073081) | 0.042131 / 0.037052 (0.005078) | 0.345371 / 0.258489 (0.086882) | 0.390232 / 0.293841 (0.096392) | 0.032395 / 0.128546 (-0.096152) | 0.011669 / 0.075646 (-0.063978) | 0.087649 / 0.419271 (-0.331622) | 0.042465 / 0.043533 (-0.001068) | 0.342863 / 0.255139 (0.087724) | 0.368947 / 0.283200 (0.085748) | 0.091725 / 0.141683 (-0.049958) | 1.477435 / 1.452155 (0.025280) | 1.563449 / 1.492716 (0.070733) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.208016 / 0.018006 (0.190010) | 0.428387 / 0.000490 (0.427898) | 0.000443 / 0.000200 (0.000243) | 0.000060 / 0.000054 (0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026963 / 0.037411 (-0.010449) | 0.103854 / 0.014526 (0.089328) | 0.109068 / 0.176557 (-0.067488) | 0.160107 / 0.737135 (-0.577028) | 0.112843 / 0.296338 (-0.183496) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.437161 / 0.215209 (0.221952) | 4.396178 / 2.077655 (2.318523) | 2.067597 / 1.504120 (0.563477) | 1.875247 / 1.541195 (0.334053) | 1.962451 / 1.468490 (0.493961) | 0.701427 / 4.584777 (-3.883350) | 3.459564 / 3.745712 (-0.286148) | 1.959482 / 5.269862 (-3.310380) | 1.191866 / 4.565676 (-3.373810) | 0.083243 / 0.424275 (-0.341032) | 0.012740 / 0.007607 (0.005133) | 0.535236 / 0.226044 (0.309191) | 5.351715 / 2.268929 (3.082786) | 2.490868 / 55.444624 (-52.953756) | 2.195680 / 6.876477 (-4.680797) | 2.233854 / 2.142072 (0.091781) | 0.809041 / 4.805227 (-3.996187) | 0.151498 / 6.500664 (-6.349166) | 0.068297 / 0.075469 (-0.007172) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.303596 / 1.841788 
(-0.538192) | 14.712746 / 8.074308 (6.638438) | 14.778412 / 10.191392 (4.587020) | 0.147093 / 0.680424 (-0.533331) | 0.017105 / 0.534201 (-0.517096) | 0.381687 / 0.579283 (-0.197596) | 0.402435 / 0.434364 (-0.031929) | 0.453538 / 0.540337 (-0.086800) | 0.538866 / 1.386936 (-0.848070) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#10f637c3a598c8042865b31f779e315a3da5337e \"CML watermark\")\n" ]
1,678,904,767,000
1,679,315,649,000
1,679,315,218,000
MEMBER
null
To fix errors like ``` requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: https://hub-ci.huggingface.co/api/datasets/__DUMMY_TRANSFORMERS_USER__/... ``` (e.g. from this [failing CI](https://github.com/huggingface/datasets/actions/runs/4428956210/jobs/7769160997)). 0.11.0 is the current minimum version in `transformers`. Around 5% of users are currently using versions `<0.11.0`.
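For users checking whether they are affected, a quick version check (a generic snippet, not part of this PR):

```python
import huggingface_hub

# datasets will now require huggingface_hub >= 0.11.0;
# anything older should be upgraded, e.g. with `pip install -U huggingface_hub`.
print(huggingface_hub.__version__)
```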
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5642/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5642/timeline
null
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5642", "html_url": "https://github.com/huggingface/datasets/pull/5642", "diff_url": "https://github.com/huggingface/datasets/pull/5642.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5642.patch", "merged_at": "2023-03-20T12:26:58" }
true
https://api.github.com/repos/huggingface/datasets/issues/5641
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5641/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5641/comments
https://api.github.com/repos/huggingface/datasets/issues/5641/events
https://github.com/huggingface/datasets/issues/5641
1,625,942,730
I_kwDODunzps5g6erK
5,641
Features cannot be named "self"
{ "login": "alialamiidrissi", "id": 14365168, "node_id": "MDQ6VXNlcjE0MzY1MTY4", "avatar_url": "https://avatars.githubusercontent.com/u/14365168?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alialamiidrissi", "html_url": "https://github.com/alialamiidrissi", "followers_url": "https://api.github.com/users/alialamiidrissi/followers", "following_url": "https://api.github.com/users/alialamiidrissi/following{/other_user}", "gists_url": "https://api.github.com/users/alialamiidrissi/gists{/gist_id}", "starred_url": "https://api.github.com/users/alialamiidrissi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alialamiidrissi/subscriptions", "organizations_url": "https://api.github.com/users/alialamiidrissi/orgs", "repos_url": "https://api.github.com/users/alialamiidrissi/repos", "events_url": "https://api.github.com/users/alialamiidrissi/events{/privacy}", "received_events_url": "https://api.github.com/users/alialamiidrissi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,678,900,600,000
1,678,986,891,000
1,678,986,891,000
NONE
null
### Describe the bug Hi, I noticed that we cannot create a HuggingFace dataset from a Pandas DataFrame with a column named `self`. The error seems to come from argument validation in the `Features.from_dict` function. ### Steps to reproduce the bug ```python import pandas as pd import datasets dummy_pandas = pd.DataFrame([0,1,2,3], columns = ["self"]) datasets.arrow_dataset.Dataset.from_pandas(dummy_pandas) ``` ### Expected behavior No error thrown ### Environment info - `datasets` version: 2.8.0 - Python version: 3.9.5 - PyArrow version: 6.0.1 - Pandas version: 1.4.1
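Until this is fixed, one possible workaround is to rename the column before conversion (a hypothetical mitigation, not an official fix):

```python
import pandas as pd
import datasets

# Renaming the offending "self" column sidesteps the keyword collision:
dummy_pandas = pd.DataFrame([0, 1, 2, 3], columns=["self"])
ds = datasets.Dataset.from_pandas(dummy_pandas.rename(columns={"self": "self_"}))
```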
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5641/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5641/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5640
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5640/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5640/comments
https://api.github.com/repos/huggingface/datasets/issues/5640/events
https://github.com/huggingface/datasets/pull/5640
1,625,896,057
PR_kwDODunzps5MID3I
5,640
Less zip false positives
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006998 / 0.011353 (-0.004355) | 0.005093 / 0.011008 (-0.005916) | 0.100490 / 0.038508 (0.061982) | 0.032736 / 0.023109 (0.009627) | 0.297738 / 0.275898 (0.021840) | 0.322255 / 0.323480 (-0.001225) | 0.005583 / 0.007986 (-0.002402) | 0.004007 / 0.004328 (-0.000321) | 0.075863 / 0.004250 (0.071613) | 0.044212 / 0.037052 (0.007159) | 0.300033 / 0.258489 (0.041544) | 0.341997 / 0.293841 (0.048156) | 0.036172 / 0.128546 (-0.092374) | 0.012176 / 0.075646 (-0.063471) | 0.356052 / 0.419271 (-0.063220) | 0.050438 / 0.043533 (0.006905) | 0.294677 / 0.255139 (0.039538) | 0.318050 / 0.283200 (0.034850) | 0.104733 / 0.141683 (-0.036950) | 1.435681 / 1.452155 (-0.016474) | 1.534793 / 1.492716 (0.042076) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.242815 / 0.018006 (0.224809) | 0.565983 / 0.000490 (0.565494) | 0.006800 / 0.000200 (0.006600) | 0.000124 / 0.000054 (0.000070) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026548 / 0.037411 (-0.010863) | 0.104816 / 0.014526 (0.090290) | 0.116222 / 0.176557 (-0.060335) | 0.172143 / 0.737135 (-0.564992) | 0.121631 / 0.296338 (-0.174707) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.400126 / 0.215209 (0.184917) | 4.004538 / 2.077655 (1.926883) | 1.798822 / 1.504120 (0.294702) | 1.595191 / 1.541195 (0.053996) | 1.645777 / 1.468490 
(0.177287) | 0.705643 / 4.584777 (-3.879134) | 3.750887 / 3.745712 (0.005175) | 2.136547 / 5.269862 (-3.133315) | 1.475881 / 4.565676 (-3.089795) | 0.086921 / 0.424275 (-0.337354) | 0.012379 / 0.007607 (0.004771) | 0.505824 / 0.226044 (0.279779) | 5.052364 / 2.268929 (2.783435) | 2.279983 / 55.444624 (-53.164641) | 1.932253 / 6.876477 (-4.944224) | 2.051359 / 2.142072 (-0.090714) | 0.851906 / 4.805227 (-3.953321) | 0.169566 / 6.500664 (-6.331098) | 0.064600 / 0.075469 (-0.010869) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.165859 / 1.841788 (-0.675929) | 15.049950 / 8.074308 (6.975642) | 14.095981 / 10.191392 (3.904589) | 0.151779 / 0.680424 (-0.528645) | 0.017537 / 0.534201 (-0.516664) | 0.420164 / 0.579283 (-0.159119) | 0.418932 / 0.434364 (-0.015432) | 0.488749 / 0.540337 (-0.051588) | 0.582359 / 1.386936 (-0.804577) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007426 / 0.011353 (-0.003927) | 0.005248 / 0.011008 (-0.005761) | 0.074118 / 0.038508 (0.035610) | 0.034223 / 0.023109 (0.011114) | 0.337780 / 0.275898 (0.061882) | 0.376300 / 0.323480 (0.052820) | 0.006142 / 0.007986 (-0.001843) | 0.004246 / 0.004328 (-0.000083) | 0.074177 / 0.004250 (0.069926) | 0.052698 / 0.037052 (0.015646) | 0.340229 / 0.258489 (0.081740) | 0.396172 / 0.293841 (0.102331) | 0.037293 / 0.128546 (-0.091253) | 0.012514 / 0.075646 (-0.063132) | 0.087144 / 0.419271 (-0.332128) | 0.051922 / 0.043533 (0.008390) | 0.333188 / 0.255139 (0.078049) | 0.355420 / 0.283200 (0.072220) | 0.110273 / 0.141683 (-0.031410) | 1.447826 / 1.452155 (-0.004329) | 1.561135 / 1.492716 (0.068419) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.269203 / 0.018006 (0.251197) | 0.551997 / 0.000490 (0.551508) | 0.001558 / 0.000200 (0.001359) | 0.000090 / 0.000054 (0.000035) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029511 / 0.037411 (-0.007900) | 0.108614 / 0.014526 (0.094089) | 0.123438 / 0.176557 (-0.053118) | 0.171596 / 0.737135 (-0.565539) | 0.126828 / 0.296338 (-0.169511) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.420520 / 0.215209 (0.205310) | 4.175672 / 2.077655 (2.098017) | 1.982220 / 1.504120 (0.478101) | 1.788575 / 1.541195 (0.247381) | 1.860840 / 1.468490 (0.392349) | 0.706730 / 4.584777 (-3.878047) | 3.858718 / 3.745712 (0.113005) | 3.069389 / 5.269862 (-2.200472) | 1.827603 / 4.565676 (-2.738073) | 0.087893 / 0.424275 (-0.336382) | 0.012613 / 0.007607 (0.005006) | 0.524177 / 0.226044 (0.298132) | 5.177077 / 2.268929 (2.908148) | 2.494397 / 55.444624 (-52.950227) | 2.189484 / 6.876477 (-4.686992) | 2.217626 / 2.142072 (0.075554) | 0.846326 / 4.805227 (-3.958901) | 0.176558 / 6.500664 (-6.324106) | 0.065018 / 0.075469 (-0.010451) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.268618 / 1.841788 (-0.573170) | 15.132711 / 8.074308 (7.058403) | 14.585530 / 10.191392 (4.394138) | 0.163454 / 0.680424 (-0.516970) | 0.017442 / 0.534201 (-0.516759) | 0.421746 / 0.579283 (-0.157537) | 0.425412 / 0.434364 (-0.008952) | 0.499178 / 0.540337 (-0.041159) | 0.595458 / 1.386936 (-0.791478) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ab77e58cd32413f4ef4828134a2470ebd53bb542 \"CML watermark\")\n", "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | 
write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007980 / 0.011353 (-0.003373) | 0.005414 / 0.011008 (-0.005594) | 0.099226 / 0.038508 (0.060718) | 0.035442 / 0.023109 (0.012332) | 0.304851 / 0.275898 (0.028952) | 0.337144 / 0.323480 (0.013664) | 0.006162 / 0.007986 (-0.001823) | 0.004151 / 0.004328 (-0.000177) | 0.074708 / 0.004250 (0.070458) | 0.049690 / 0.037052 (0.012638) | 0.307658 / 0.258489 (0.049168) | 0.358472 / 0.293841 (0.064631) | 0.037181 / 0.128546 (-0.091365) | 0.012259 / 0.075646 (-0.063387) | 0.335426 / 0.419271 (-0.083846) | 0.050790 / 0.043533 (0.007257) | 0.301715 / 0.255139 (0.046576) | 0.320834 / 0.283200 (0.037634) | 0.102357 / 0.141683 (-0.039326) | 1.454750 / 1.452155 (0.002596) | 1.571994 / 1.492716 (0.079278) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.218708 / 0.018006 (0.200702) | 0.444391 / 0.000490 (0.443901) | 0.005717 / 0.000200 (0.005517) | 0.000089 / 0.000054 (0.000035) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028017 / 0.037411 (-0.009395) | 0.112753 / 0.014526 (0.098227) | 0.121003 / 0.176557 (-0.055554) | 0.181085 / 0.737135 (-0.556050) | 0.127211 / 0.296338 (-0.169127) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.400803 / 0.215209 (0.185594) | 4.007315 / 2.077655 (1.929660) | 1.826911 / 1.504120 (0.322791) | 1.637799 / 1.541195 (0.096605) | 1.699754 / 1.468490 (0.231264) | 0.709413 / 4.584777 (-3.875364) | 4.008904 / 3.745712 (0.263192) | 3.916540 / 5.269862 (-1.353322) | 1.902102 / 4.565676 (-2.663575) | 0.089048 / 0.424275 (-0.335227) | 0.012763 / 0.007607 (0.005155) | 0.498957 / 0.226044 (0.272913) | 4.979865 / 2.268929 (2.710937) | 2.301987 / 55.444624 (-53.142637) | 1.929404 / 6.876477 (-4.947073) | 2.107839 / 2.142072 (-0.034233) | 0.857253 / 4.805227 (-3.947974) | 0.171935 / 6.500664 (-6.328729) | 0.066753 / 0.075469 (-0.008716) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.186811 / 1.841788 (-0.654977) | 15.866319 / 8.074308 (7.792011) | 14.738555 / 10.191392 (4.547163) | 0.142879 / 0.680424 (-0.537544) | 0.017679 / 0.534201 (-0.516522) | 0.422840 / 0.579283 (-0.156443) | 0.450307 / 0.434364 (0.015943) | 0.491802 / 0.540337 (-0.048536) | 0.588837 / 
1.386936 (-0.798099) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007659 / 0.011353 (-0.003694) | 0.005331 / 0.011008 (-0.005678) | 0.075360 / 0.038508 (0.036852) | 0.034011 / 0.023109 (0.010902) | 0.354488 / 0.275898 (0.078590) | 0.401781 / 0.323480 (0.078301) | 0.005806 / 0.007986 (-0.002179) | 0.004029 / 0.004328 (-0.000300) | 0.073822 / 0.004250 (0.069572) | 0.049067 / 0.037052 (0.012015) | 0.364483 / 0.258489 (0.105994) | 0.405637 / 0.293841 (0.111796) | 0.037166 / 0.128546 (-0.091380) | 0.012397 / 0.075646 (-0.063249) | 0.087346 / 0.419271 (-0.331926) | 0.050888 / 0.043533 (0.007355) | 0.334796 / 0.255139 (0.079657) | 0.387681 / 0.283200 (0.104481) | 0.105056 / 0.141683 (-0.036627) | 1.471630 / 1.452155 (0.019475) | 1.554764 / 1.492716 (0.062047) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.231825 / 0.018006 (0.213819) | 0.449746 / 0.000490 (0.449256) | 0.000888 / 0.000200 (0.000688) | 0.000078 / 0.000054 (0.000023) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030363 / 0.037411 (-0.007049) | 0.115234 / 0.014526 (0.100708) | 0.123005 / 0.176557 (-0.053551) | 0.172772 / 0.737135 (-0.564363) | 0.127818 / 0.296338 (-0.168520) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.425761 / 0.215209 (0.210552) | 4.237950 / 2.077655 (2.160295) | 1.992045 / 1.504120 (0.487925) | 1.801622 / 1.541195 (0.260427) | 1.918477 / 1.468490 (0.449987) | 
0.722730 / 4.584777 (-3.862047) | 4.015968 / 3.745712 (0.270256) | 3.720412 / 5.269862 (-1.549450) | 1.763111 / 4.565676 (-2.802566) | 0.089041 / 0.424275 (-0.335234) | 0.012608 / 0.007607 (0.005001) | 0.522645 / 0.226044 (0.296601) | 5.227108 / 2.268929 (2.958180) | 2.444714 / 55.444624 (-52.999910) | 2.109745 / 6.876477 (-4.766732) | 2.194042 / 2.142072 (0.051969) | 0.871781 / 4.805227 (-3.933447) | 0.173149 / 6.500664 (-6.327515) | 0.066192 / 0.075469 (-0.009277) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.312051 / 1.841788 (-0.529737) | 16.024315 / 8.074308 (7.950007) | 15.123823 / 10.191392 (4.932431) | 0.163997 / 0.680424 (-0.516427) | 0.017595 / 0.534201 (-0.516606) | 0.426379 / 0.579283 (-0.152904) | 0.467709 / 0.434364 (0.033345) | 0.498308 / 0.540337 (-0.042030) | 0.591426 / 1.386936 (-0.795510) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#13488cc110b67090289794f48d5c84a4fd0c063a \"CML watermark\")\n", "CI is failing due to unrelated issues, hopefully https://github.com/huggingface/datasets/pull/5642 fixes it", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006478 / 0.011353 (-0.004875) | 0.004347 / 0.011008 (-0.006661) | 0.097103 / 0.038508 (0.058595) | 0.027650 / 0.023109 (0.004541) | 0.372355 / 0.275898 (0.096457) | 0.408794 / 0.323480 (0.085314) | 0.005034 / 0.007986 (-0.002952) | 0.003252 / 0.004328 (-0.001076) | 0.074068 / 0.004250 (0.069818) | 0.035542 / 0.037052 (-0.001510) | 0.367392 / 0.258489 (0.108903) | 0.409644 / 0.293841 (0.115803) | 0.031745 / 0.128546 (-0.096801) | 0.011501 / 0.075646 (-0.064145) | 0.323355 / 0.419271 (-0.095917) | 0.043065 / 0.043533 (-0.000467) | 0.377313 / 0.255139 (0.122174) | 0.395326 / 0.283200 (0.112127) | 0.087101 / 0.141683 (-0.054582) | 1.461228 / 1.452155 (0.009073) | 1.529413 / 1.492716 (0.036696) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old 
(diff) | 0.199245 / 0.018006 (0.181239) | 0.409978 / 0.000490 (0.409488) | 0.002655 / 0.000200 (0.002455) | 0.000070 / 0.000054 (0.000016) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023903 / 0.037411 (-0.013508) | 0.097855 / 0.014526 (0.083330) | 0.106405 / 0.176557 (-0.070152) | 0.166889 / 0.737135 (-0.570247) | 0.110256 / 0.296338 (-0.186082) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.440351 / 0.215209 (0.225142) | 4.382848 / 2.077655 (2.305194) | 2.049602 / 1.504120 (0.545482) | 1.824638 / 1.541195 (0.283443) | 1.850519 / 1.468490 (0.382029) | 0.702652 / 4.584777 (-3.882125) | 3.394571 / 3.745712 (-0.351141) | 1.940608 / 5.269862 (-3.329254) | 1.263961 / 4.565676 (-3.301716) | 0.083985 / 0.424275 (-0.340290) | 0.013046 / 0.007607 (0.005439) | 0.538272 / 0.226044 (0.312228) | 5.407563 / 2.268929 (3.138634) | 2.519207 / 55.444624 (-52.925418) | 2.153379 / 6.876477 (-4.723098) | 2.394512 / 2.142072 (0.252439) | 0.812840 / 4.805227 (-3.992387) | 0.152868 / 6.500664 (-6.347796) | 0.067823 / 0.075469 (-0.007646) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.220031 / 1.841788 (-0.621757) | 13.781237 / 8.074308 (5.706929) | 14.203975 / 10.191392 (4.012583) | 0.141077 / 0.680424 (-0.539347) | 0.016518 / 0.534201 (-0.517682) | 0.379079 / 0.579283 (-0.200204) | 0.378916 / 0.434364 (-0.055448) | 0.434589 / 0.540337 (-0.105749) | 0.521129 / 1.386936 (-0.865807) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006997 / 0.011353 (-0.004356) | 0.004599 / 0.011008 (-0.006410) | 0.078700 / 0.038508 (0.040192) | 0.027902 / 0.023109 (0.004793) | 0.344406 / 0.275898 (0.068508) | 0.392918 / 0.323480 (0.069438) | 0.005175 / 0.007986 (-0.002811) | 0.004755 / 0.004328 (0.000427) | 0.077707 / 0.004250 (0.073457) | 0.039409 / 0.037052 (0.002357) | 0.343250 / 0.258489 (0.084761) | 0.405544 / 0.293841 (0.111703) | 0.032286 / 0.128546 (-0.096260) | 0.011674 / 0.075646 (-0.063972) | 0.087633 / 0.419271 (-0.331639) | 0.043346 / 0.043533 (-0.000186) | 0.355076 / 0.255139 (0.099937) | 0.382155 / 0.283200 (0.098955) | 0.090914 / 0.141683 (-0.050769) | 1.518369 / 1.452155 (0.066215) | 1.583530 / 1.492716 (0.090813) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.160369 / 0.018006 (0.142362) | 0.406844 / 0.000490 (0.406354) | 0.002651 / 0.000200 (0.002451) | 0.000080 / 0.000054 (0.000025) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025295 / 0.037411 (-0.012116) | 0.101490 / 0.014526 (0.086964) | 0.108825 / 0.176557 (-0.067732) | 0.161673 / 0.737135 (-0.575462) | 0.113610 / 0.296338 (-0.182729) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.443514 / 0.215209 (0.228305) | 4.436722 / 2.077655 (2.359067) | 2.144008 / 1.504120 (0.639888) | 2.005324 / 1.541195 (0.464129) | 2.123356 / 1.468490 (0.654866) | 0.697217 / 4.584777 (-3.887560) | 3.401105 / 3.745712 (-0.344607) | 1.874621 / 5.269862 (-3.395240) | 1.165069 / 4.565676 (-3.400608) | 0.082799 / 0.424275 (-0.341476) | 0.012806 / 0.007607 (0.005199) | 0.542688 / 0.226044 (0.316644) | 5.420963 / 2.268929 (3.152034) | 2.579034 / 55.444624 (-52.865590) | 2.240201 / 6.876477 (-4.636276) | 2.261309 / 2.142072 (0.119237) | 0.800246 / 4.805227 (-4.004981) | 0.150380 / 6.500664 (-6.350285) | 0.066880 / 0.075469 (-0.008589) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.281721 / 1.841788 (-0.560067) | 13.906361 / 8.074308 (5.832053) | 14.135336 / 10.191392 (3.943944) | 0.128865 / 0.680424 (-0.551559) | 0.016452 / 0.534201 (-0.517749) | 0.373563 / 0.579283 (-0.205720) | 0.385321 / 0.434364 (-0.049043) | 0.437198 / 0.540337 
(-0.103139) | 0.530720 / 1.386936 (-0.856216) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#e2f8e17f3c8f8d0cb77a4c566a78e31fab47108c \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008099 / 0.011353 (-0.003254) | 0.005093 / 0.011008 (-0.005916) | 0.106258 / 0.038508 (0.067750) | 0.037051 / 0.023109 (0.013942) | 0.347960 / 0.275898 (0.072062) | 0.370849 / 0.323480 (0.047369) | 0.006122 / 0.007986 (-0.001863) | 0.004094 / 0.004328 (-0.000235) | 0.079549 / 0.004250 (0.075299) | 0.046563 / 0.037052 (0.009510) | 0.332735 / 0.258489 (0.074246) | 0.417061 / 0.293841 (0.123220) | 0.038105 / 0.128546 (-0.090441) | 0.011886 / 0.075646 (-0.063760) | 0.342103 / 0.419271 (-0.077169) | 0.053233 / 0.043533 (0.009700) | 0.344754 / 0.255139 (0.089615) | 0.355354 / 0.283200 (0.072155) | 0.101059 / 0.141683 (-0.040624) | 1.518561 / 1.452155 (0.066406) | 1.558652 / 1.492716 (0.065935) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.225919 / 0.018006 (0.207913) | 0.518539 / 0.000490 (0.518049) | 0.006230 / 0.000200 (0.006030) | 0.000124 / 0.000054 (0.000070) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026782 / 0.037411 (-0.010629) | 0.108457 / 0.014526 (0.093931) | 0.125203 / 0.176557 (-0.051353) | 0.175726 / 0.737135 (-0.561409) | 0.127051 / 0.296338 (-0.169287) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| 
new / old (diff) | 0.416427 / 0.215209 (0.201217) | 4.168851 / 2.077655 (2.091196) | 1.962238 / 1.504120 (0.458118) | 1.825224 / 1.541195 (0.284029) | 1.831200 / 1.468490 (0.362710) | 0.765526 / 4.584777 (-3.819250) | 4.303957 / 3.745712 (0.558245) | 2.193467 / 5.269862 (-3.076395) | 1.654605 / 4.565676 (-2.911071) | 0.096709 / 0.424275 (-0.327566) | 0.013792 / 0.007607 (0.006185) | 0.537862 / 0.226044 (0.311818) | 5.152230 / 2.268929 (2.883302) | 2.520938 / 55.444624 (-52.923686) | 2.108422 / 6.876477 (-4.768054) | 2.214220 / 2.142072 (0.072147) | 0.834320 / 4.805227 (-3.970907) | 0.170635 / 6.500664 (-6.330029) | 0.063131 / 0.075469 (-0.012338) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.215767 / 1.841788 (-0.626020) | 15.254781 / 8.074308 (7.180473) | 14.360764 / 10.191392 (4.169372) | 0.172511 / 0.680424 (-0.507913) | 0.020161 / 0.534201 (-0.514040) | 0.426936 / 0.579283 (-0.152347) | 0.438771 / 0.434364 (0.004407) | 0.486973 / 0.540337 (-0.053364) | 0.584238 / 1.386936 (-0.802698) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006777 / 0.011353 (-0.004576) | 0.005304 / 0.011008 (-0.005704) | 0.073717 / 0.038508 (0.035209) | 0.033604 / 0.023109 (0.010494) | 0.340448 / 0.275898 (0.064550) | 0.351861 / 0.323480 (0.028381) | 0.005786 / 0.007986 (-0.002199) | 0.005013 / 0.004328 (0.000685) | 0.071263 / 0.004250 (0.067012) | 0.048189 / 0.037052 (0.011137) | 0.339457 / 0.258489 (0.080968) | 0.384383 / 0.293841 (0.090542) | 0.035563 / 0.128546 (-0.092983) | 0.011509 / 0.075646 (-0.064137) | 0.083722 / 0.419271 (-0.335550) | 0.048886 / 0.043533 (0.005353) | 0.350184 / 0.255139 (0.095045) | 0.361037 / 0.283200 (0.077837) | 0.105191 / 0.141683 (-0.036492) | 1.503247 / 1.452155 (0.051093) | 1.582298 / 1.492716 (0.089581) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.221687 / 0.018006 (0.203681) | 0.466489 / 0.000490 (0.465999) | 0.000484 / 0.000200 
(0.000284) | 0.000069 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027978 / 0.037411 (-0.009434) | 0.119572 / 0.014526 (0.105047) | 0.133530 / 0.176557 (-0.043026) | 0.177892 / 0.737135 (-0.559243) | 0.127045 / 0.296338 (-0.169294) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.430198 / 0.215209 (0.214989) | 4.435512 / 2.077655 (2.357858) | 2.007183 / 1.504120 (0.503063) | 1.799230 / 1.541195 (0.258036) | 1.884750 / 1.468490 (0.416260) | 0.745232 / 4.584777 (-3.839545) | 4.088069 / 3.745712 (0.342357) | 4.114669 / 5.269862 (-1.155193) | 2.374086 / 4.565676 (-2.191590) | 0.089154 / 0.424275 (-0.335121) | 0.012938 / 0.007607 (0.005331) | 0.505954 / 0.226044 (0.279909) | 5.194226 / 2.268929 (2.925298) | 2.487230 / 55.444624 (-52.957394) | 2.163353 / 6.876477 (-4.713124) | 2.177879 / 2.142072 (0.035807) | 0.828728 / 4.805227 (-3.976499) | 0.171157 / 6.500664 (-6.329507) | 0.062883 / 0.075469 (-0.012586) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.275906 / 1.841788 (-0.565882) | 15.235484 / 8.074308 (7.161176) | 14.467396 / 10.191392 (4.276004) | 0.198994 / 0.680424 (-0.481430) | 0.020203 / 0.534201 (-0.513998) | 0.447904 / 0.579283 (-0.131380) | 0.454210 / 0.434364 (0.019846) | 0.528062 / 0.540337 (-0.012275) | 0.619311 / 1.386936 (-0.767625) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#11cd0f73acbce1d16174f2555e56fda511d5a08b \"CML watermark\")\n" ]
1,678,898,939,000
1,678,974,457,000
1,678,974,012,000
MEMBER
null
`zipfile.is_zipfile` returns false positives for some Parquet files. This causes errors when loading certain Parquet datasets, where some files are wrongly considered ZIP files by `zipfile.is_zipfile`. This is a known issue: https://github.com/python/cpython/issues/72680 At first I wanted to rely only on magic numbers, but then I found that someone contributed a [fix to is_zipfile](https://github.com/python/cpython/pull/5053) - do you think we should use it, @albertvillanova, or not? IMO it's OK to rely on magic numbers only for now, since in streaming mode we've had no issue checking only the magic number so far. Close https://github.com/huggingface/datasets/issues/5639
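For context, a minimal sketch (not the actual `datasets` implementation) of the magic-number approach described above; the helper name is an assumption:

```python
# Minimal sketch, not the `datasets` implementation: check leading magic bytes
# instead of trusting zipfile.is_zipfile, which scans for the End Of Central
# Directory signature and can match bytes that occur inside Parquet pages.
ZIP_MAGIC = b"PK\x03\x04"   # local file header that opens a real ZIP archive
PARQUET_MAGIC = b"PAR1"     # Parquet files begin (and end) with these bytes

def starts_with_magic(path: str, magic: bytes) -> bool:
    with open(path, "rb") as f:
        return f.read(len(magic)) == magic

# Usage (the path is a placeholder):
# starts_with_magic("data.parquet", PARQUET_MAGIC)  # True for a Parquet file
# starts_with_magic("data.parquet", ZIP_MAGIC)      # False, unlike is_zipfile
```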
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5640/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5640/timeline
null
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5640", "html_url": "https://github.com/huggingface/datasets/pull/5640", "diff_url": "https://github.com/huggingface/datasets/pull/5640.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5640.patch", "merged_at": "2023-03-16T13:40:12" }
true
https://api.github.com/repos/huggingface/datasets/issues/5639
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5639/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5639/comments
https://api.github.com/repos/huggingface/datasets/issues/5639/events
https://github.com/huggingface/datasets/issues/5639
1,625,737,098
I_kwDODunzps5g5seK
5,639
Parquet file wrongly recognized as zip prevents loading a dataset
{ "login": "clefourrier", "id": 22726840, "node_id": "MDQ6VXNlcjIyNzI2ODQw", "avatar_url": "https://avatars.githubusercontent.com/u/22726840?v=4", "gravatar_id": "", "url": "https://api.github.com/users/clefourrier", "html_url": "https://github.com/clefourrier", "followers_url": "https://api.github.com/users/clefourrier/followers", "following_url": "https://api.github.com/users/clefourrier/following{/other_user}", "gists_url": "https://api.github.com/users/clefourrier/gists{/gist_id}", "starred_url": "https://api.github.com/users/clefourrier/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/clefourrier/subscriptions", "organizations_url": "https://api.github.com/users/clefourrier/orgs", "repos_url": "https://api.github.com/users/clefourrier/repos", "events_url": "https://api.github.com/users/clefourrier/events{/privacy}", "received_events_url": "https://api.github.com/users/clefourrier/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,678,893,645,000
1,678,974,014,000
1,678,974,014,000
CONTRIBUTOR
null
### Describe the bug When trying to `load_dataset_builder` for `HuggingFaceGECLM/StackExchange_Mar2023`, extraction fails because the parquet file [devops-00000-of-00001-22fe902fd8702892.parquet](https://huggingface.co/datasets/HuggingFaceGECLM/StackExchange_Mar2023/resolve/1f8c9a2ab6f7d0f9ae904b8b922e4384592ae1a5/data/devops-00000-of-00001-22fe902fd8702892.parquet) is wrongly identified by Python as a ZIP file rather than a Parquet file. (Full thread on [Slack](https://huggingface.slack.com/archives/C02V51Q3800/p1678890880803599)) ### Steps to reproduce the bug ```python from datasets import load_dataset_builder ds = load_dataset_builder("HuggingFaceGECLM/StackExchange_Mar2023") ``` ### Expected behavior Loading the file normally. ### Environment info - `datasets` version: 2.3.2 - Platform: Linux-5.14.0-1058-oem-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 8.0.0 - Pandas version: 1.4.3
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5639/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5639/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5638
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5638/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5638/comments
https://api.github.com/repos/huggingface/datasets/issues/5638/events
https://github.com/huggingface/datasets/issues/5638
1,625,564,471
I_kwDODunzps5g5CU3
5,638
xPath to implement all operations for Path
{ "login": "thomasw21", "id": 24695242, "node_id": "MDQ6VXNlcjI0Njk1MjQy", "avatar_url": "https://avatars.githubusercontent.com/u/24695242?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thomasw21", "html_url": "https://github.com/thomasw21", "followers_url": "https://api.github.com/users/thomasw21/followers", "following_url": "https://api.github.com/users/thomasw21/following{/other_user}", "gists_url": "https://api.github.com/users/thomasw21/gists{/gist_id}", "starred_url": "https://api.github.com/users/thomasw21/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomasw21/subscriptions", "organizations_url": "https://api.github.com/users/thomasw21/orgs", "repos_url": "https://api.github.com/users/thomasw21/repos", "events_url": "https://api.github.com/users/thomasw21/events{/privacy}", "received_events_url": "https://api.github.com/users/thomasw21/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
[ " I think https://github.com/fsspec/universal_pathlib is the project you are looking for.\r\n\r\n`xPath` has the methods often used in dataset scripts, and `mkdir` is not one of them (`dl_manager`'s role is to \"interact\" with the file system, so using `mkdir` is discouraged).", "Right is there a difference between UPath and xPath? Typically is xPath less well implemented compared to Upath, ie missing some implementations of some methods? Or are there methods in xPath that are not implemented with UPath?", "`xPath` is an internal component (it doesn't have a leading underscore in the name, but it should) not meant to be used outside of `datasets`, and it's only tested on HTTP URLs, not S3.\r\n\r\n", "Okay I understand that xPath won't support my usecase. What I was perhaps getting to is why not use UPath in `datasets` instead of `xPath` if UPath seems to have strictly more robust implementations.", "It seems like `universal_pathlib` does not support `fsspec` URL chaining (`::` is the chaining symbol) and \"compression\" filesystems (e.g., `zip`), but this is what we need to access and stream files from within an archive (e.g., we want to stream URLs such as this one: `zip://data.parquet::https://www.dummyurl.com/archive.zip`)" ]
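The URL chaining mentioned in the last comment can be illustrated with plain `fsspec`; the archive URL is the same dummy placeholder used in the comment:

```python
# Illustration of fsspec URL chaining with "::": stream a file from inside a
# remote ZIP archive. The URL is a dummy placeholder, as in the comment above.
import fsspec

with fsspec.open("zip://data.parquet::https://www.dummyurl.com/archive.zip", "rb") as f:
    header = f.read(4)  # b"PAR1" if the archived file is Parquet
```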
1,678,888,031,000
1,679,059,272,000
1,679,059,272,000
MEMBER
null
### Feature request The current xPath implementation is a great extension of Path for working with remote objects. However, some methods such as `mkdir` are not implemented correctly: they should rely on `fsspec` methods instead of defaulting to `Path` methods, which only work locally. ### Motivation I'm using xPath to interact with remote objects. ### Your contribution I could try to make a PR. I'm a bit unfamiliar with chaining right now.
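A hedged sketch of how the requested `mkdir` could delegate to `fsspec`; `RemotePath` is a hypothetical stand-in, not the real `xPath` class, and the S3 URL is a placeholder:

```python
# Hypothetical sketch of an fsspec-backed mkdir; `RemotePath` stands in for
# xPath, whose real internals differ.
import fsspec

class RemotePath:
    def __init__(self, url: str):
        self.url = url

    def mkdir(self, parents: bool = False, exist_ok: bool = False) -> None:
        # Resolve the URL to a concrete filesystem (s3, gcs, local, ...).
        fs, path = fsspec.core.url_to_fs(self.url)
        if parents:
            fs.makedirs(path, exist_ok=exist_ok)
        else:
            fs.mkdir(path)

# Example usage with a placeholder bucket:
# RemotePath("s3://my-bucket/new-dir").mkdir(parents=True, exist_ok=True)
```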
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5638/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5638/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5637
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5637/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5637/comments
https://api.github.com/repos/huggingface/datasets/issues/5637/events
https://github.com/huggingface/datasets/issues/5637
1,625,295,691
I_kwDODunzps5g4AtL
5,637
IterableDataset with_format does not support 'device' keyword for jax
{ "login": "Lime-Cakes", "id": 91322985, "node_id": "MDQ6VXNlcjkxMzIyOTg1", "avatar_url": "https://avatars.githubusercontent.com/u/91322985?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Lime-Cakes", "html_url": "https://github.com/Lime-Cakes", "followers_url": "https://api.github.com/users/Lime-Cakes/followers", "following_url": "https://api.github.com/users/Lime-Cakes/following{/other_user}", "gists_url": "https://api.github.com/users/Lime-Cakes/gists{/gist_id}", "starred_url": "https://api.github.com/users/Lime-Cakes/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Lime-Cakes/subscriptions", "organizations_url": "https://api.github.com/users/Lime-Cakes/orgs", "repos_url": "https://api.github.com/users/Lime-Cakes/repos", "events_url": "https://api.github.com/users/Lime-Cakes/events{/privacy}", "received_events_url": "https://api.github.com/users/Lime-Cakes/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Hi! Yes, only `torch` is currently supported. Unlike `Dataset`, `IterableDataset` is not PyArrow-backed, so we cannot simply call `to_numpy` on the underlying subtables to format them numerically. Instead, we must manually convert examples to (numeric) arrays while preserving consistency with `Dataset`, which is not trivial, so this is still a to-do.", "Any plans to support it in the future? Or would streaming dataset be left without support for jax and tensorflow?" ]
1,678,878,252,000
1,678,991,459,000
null
NONE
null
### Describe the bug As seen here: https://huggingface.co/docs/datasets/use_with_jax dataset.with_format() supports the keyword 'device' to put data on a specific device when loaded as jax. However, when called on an IterableDataset, I got the error `TypeError: with_format() got an unexpected keyword argument 'device'` Looking over the code, it seems IterableDataset supports only pytorch and has no support for the jax `device` keyword: https://github.com/huggingface/datasets/blob/fc5c84f36684343bff3e424cb0fd1ac5ecdd66da/src/datasets/iterable_dataset.py#L1029 ### Steps to reproduce the bug 1. Load an IterableDataset (tested in streaming mode) 2. Call with_format('jax', device=device) ### Expected behavior I expect calling `with_format('jax', device=device)` to work without error, as per the [documentation](https://huggingface.co/docs/datasets/use_with_jax) ### Environment info Tested both with the newest (dev) install and the pip release (2.10.1). - `datasets` version: 2.10.2.dev0 - Platform: Linux-5.15.89+-x86_64-with-debian-bullseye-sid - Python version: 3.7.12 - Huggingface_hub version: 0.12.1 - PyArrow version: 11.0.0 - Pandas version: 1.3.5
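A minimal sketch of the reported asymmetry; `rotten_tomatoes` is only a placeholder dataset here, and the failing line reflects the error from the report rather than a re-run:

```python
# Minimal sketch of the reported Dataset / IterableDataset asymmetry.
import jax
from datasets import load_dataset

device = jax.devices()[0]

ds = load_dataset("rotten_tomatoes", split="train")
ds = ds.with_format("jax", device=device)   # works: Dataset accepts `device`

ids = load_dataset("rotten_tomatoes", split="train", streaming=True)
ids = ids.with_format("jax", device=device)  # TypeError: unexpected keyword 'device'
```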
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5637/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5637/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5636
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5636/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5636/comments
https://api.github.com/repos/huggingface/datasets/issues/5636/events
https://github.com/huggingface/datasets/pull/5636
1,623,721,577
PR_kwDODunzps5MAunR
5,636
Fix CI: ignore C901 ("some_func" is too complex) in `ruff`
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006529 / 0.011353 (-0.004824) | 0.004527 / 0.011008 (-0.006481) | 0.098051 / 0.038508 (0.059543) | 0.028058 / 0.023109 (0.004949) | 0.368543 / 0.275898 (0.092645) | 0.397126 / 0.323480 (0.073646) | 0.005072 / 0.007986 (-0.002913) | 0.003377 / 0.004328 (-0.000952) | 0.076867 / 0.004250 (0.072617) | 0.040121 / 0.037052 (0.003069) | 0.373422 / 0.258489 (0.114933) | 0.403969 / 0.293841 (0.110128) | 0.031485 / 0.128546 (-0.097061) | 0.011673 / 0.075646 (-0.063973) | 0.321837 / 0.419271 (-0.097434) | 0.042828 / 0.043533 (-0.000704) | 0.370391 / 0.255139 (0.115252) | 0.391737 / 0.283200 (0.108538) | 0.084764 / 0.141683 (-0.056919) | 1.463114 / 1.452155 (0.010959) | 1.527042 / 1.492716 (0.034325) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.200964 / 0.018006 (0.182958) | 0.403967 / 0.000490 (0.403477) | 0.002439 / 0.000200 (0.002239) | 0.000070 / 0.000054 (0.000016) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023531 / 0.037411 (-0.013880) | 0.097424 / 0.014526 (0.082899) | 0.104854 / 0.176557 (-0.071703) | 0.165682 / 0.737135 (-0.571453) | 0.109416 / 0.296338 (-0.186922) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.431041 / 0.215209 (0.215832) | 4.326039 / 2.077655 (2.248384) | 
2.085123 / 1.504120 (0.581003) | 1.922720 / 1.541195 (0.381525) | 2.006608 / 1.468490 (0.538118) | 0.703348 / 4.584777 (-3.881428) | 3.441516 / 3.745712 (-0.304196) | 1.875244 / 5.269862 (-3.394618) | 1.181341 / 4.565676 (-3.384336) | 0.083442 / 0.424275 (-0.340833) | 0.012966 / 0.007607 (0.005359) | 0.536047 / 0.226044 (0.310002) | 5.354856 / 2.268929 (3.085927) | 2.451064 / 55.444624 (-52.993560) | 2.076110 / 6.876477 (-4.800367) | 2.196507 / 2.142072 (0.054435) | 0.811196 / 4.805227 (-3.994032) | 0.152547 / 6.500664 (-6.348118) | 0.067978 / 0.075469 (-0.007491) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.196169 / 1.841788 (-0.645618) | 13.697234 / 8.074308 (5.622926) | 13.966652 / 10.191392 (3.775260) | 0.143735 / 0.680424 (-0.536688) | 0.016484 / 0.534201 (-0.517717) | 0.382349 / 0.579283 (-0.196934) | 0.401507 / 0.434364 (-0.032857) | 0.447297 / 0.540337 (-0.093041) | 0.529779 / 1.386936 (-0.857157) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006698 / 0.011353 (-0.004655) | 0.004608 / 0.011008 (-0.006400) | 0.076220 / 0.038508 (0.037712) | 0.027340 / 0.023109 (0.004231) | 0.344095 / 0.275898 (0.068197) | 0.374715 / 0.323480 (0.051235) | 0.004883 / 0.007986 (-0.003102) | 0.004658 / 0.004328 (0.000330) | 0.075381 / 0.004250 (0.071130) | 0.036099 / 0.037052 (-0.000953) | 0.340382 / 0.258489 (0.081893) | 0.383488 / 0.293841 (0.089647) | 0.031534 / 0.128546 (-0.097012) | 0.011735 / 0.075646 (-0.063912) | 0.085895 / 0.419271 (-0.333377) | 0.042226 / 0.043533 (-0.001306) | 0.340301 / 0.255139 (0.085162) | 0.366079 / 0.283200 (0.082879) | 0.088828 / 0.141683 (-0.052854) | 1.487880 / 1.452155 (0.035725) | 1.561318 / 1.492716 (0.068601) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.226366 / 0.018006 (0.208360) | 0.408934 / 0.000490 (0.408444) | 0.000396 / 0.000200 (0.000196) | 0.000060 / 0.000054 (0.000006) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024521 / 0.037411 (-0.012891) | 0.100167 / 0.014526 (0.085641) | 0.106480 / 0.176557 (-0.070077) | 0.156377 / 0.737135 (-0.580758) | 0.111709 / 0.296338 (-0.184630) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.436138 / 0.215209 (0.220928) | 4.370919 / 2.077655 (2.293265) | 2.066402 / 1.504120 (0.562282) | 1.862157 / 1.541195 (0.320962) | 1.920701 / 1.468490 (0.452211) | 0.695517 / 4.584777 (-3.889260) | 3.435558 / 3.745712 (-0.310154) | 1.864000 / 5.269862 (-3.405861) | 1.164134 / 4.565676 (-3.401543) | 0.083006 / 0.424275 (-0.341269) | 0.012751 / 0.007607 (0.005144) | 0.535405 / 0.226044 (0.309360) | 5.368530 / 2.268929 (3.099602) | 2.494197 / 55.444624 (-52.950427) | 2.161370 / 6.876477 (-4.715107) | 2.180345 / 2.142072 (0.038272) | 0.808076 / 4.805227 (-3.997151) | 0.151891 / 6.500664 (-6.348773) | 0.067643 / 0.075469 (-0.007826) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.334245 / 1.841788 (-0.507543) | 14.112805 / 8.074308 (6.038497) | 14.152303 / 10.191392 (3.960911) | 0.153492 / 0.680424 (-0.526932) | 0.016542 / 0.534201 (-0.517659) | 0.376013 / 0.579283 (-0.203270) | 0.386528 / 0.434364 (-0.047836) | 0.436461 / 0.540337 (-0.103876) | 0.519278 / 1.386936 (-0.867658) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ce1d1076fc55ac49277398304e551f0b56c3c9e2 \"CML watermark\")\n" ]
1,678,807,751,000
1,678,811,826,000
1,678,811,392,000
CONTRIBUTOR
null
I don't know if I should have added this ignore to `ruff` too, but I added it :)
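For reference, a hypothetical excerpt of the kind of configuration change this PR makes; the repo's actual file and rule list may differ:

```toml
# Hypothetical pyproject.toml excerpt; the repo's actual ruff config may differ.
[tool.ruff]
# C901: "<some_func> is too complex" -- mirrors the existing flake8 ignore.
ignore = ["C901"]
```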
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5636/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5636/timeline
null
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5636", "html_url": "https://github.com/huggingface/datasets/pull/5636", "diff_url": "https://github.com/huggingface/datasets/pull/5636.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5636.patch", "merged_at": "2023-03-14T16:29:52" }
true
https://api.github.com/repos/huggingface/datasets/issues/5635
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5635/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5635/comments
https://api.github.com/repos/huggingface/datasets/issues/5635/events
https://github.com/huggingface/datasets/pull/5635
1,623,682,558
PR_kwDODunzps5MAmLU
5,635
Pass custom metadata filename to Image/Audio folders
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5635). All of your documentation changes will be reflected on that endpoint.", "I'm not a big fan of this new param - I find assigning metadata files to splits via the `data_files` param cleaner. Also, assuming that the metadata filename is `metadata.json`/`metadata.csv` (I don't think we should allow other names), a user can do `load_dataset(\"imagefolder\", data_dir=\"data\")` to load a dataset with that structure.", "@mariosasko I don't really like this change in it's current state either but passing specific files with `data_files` also looks not quite user-friendly to me. The idea of providing specific parameter for metadata filename seems natural to me but I don't see a way for implementing it without some ugly changes in `load.py` (passing the param to factories and creating metadata patterns on the fly). Why don't you like this parameter?\r\n\r\nFor context: this PR emerged from the case where users wanted to use different metadata files with the same large set of images without copying directories on disk and it's not possible with `data_files` approach.\r\n\r\nedit: ah no, it's possible if one puts metadata files in different subdirs (so that the filenames can be left the same)", ">For context: this PR emerged from the case where users wanted to use different metadata files with the same large set of images without copying directories on disk and it's not possible with data_files approach.\r\n>\r\n>edit: ah no, it's possible if one puts metadata files in different subdirs (so that the filenames can be left the same)\r\n\r\nSeems low prio, but one way to address this would be by allowing to pass \"exclude patterns\" to `data_files`" ]
1,678,806,496,000
1,679,507,431,000
null
CONTRIBUTOR
null
This is a quick fix. Now it requires passing data via the `data_files` parameter, including the required metadata file there, and passing its filename as the `metadata_filename` parameter. For example, with a structure like: ``` data images_dir/ im1.jpg im2.jpg ... metadata_dir/ meta_file1.jsonl meta_file2.jsonl ... ``` to load data with `meta_file1.jsonl`, do: ```python ds = load_dataset("imagefolder", data_files=["data/images_dir/**", "data/metadata_dir/meta_file1.jsonl"], metadata_filename="meta_file1.jsonl") ``` Note that if you have multiple splits, the metadata file should be specified in each of them in `data_files`, something like: ```python data_files={ "train": ["data/train/**", "data/metadata_dir/meta_file1.jsonl"], "test": ["data/test/**", "data/metadata_dir/meta_file1.jsonl"] } ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5635/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5635/timeline
null
null
1
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5635", "html_url": "https://github.com/huggingface/datasets/pull/5635", "diff_url": "https://github.com/huggingface/datasets/pull/5635.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5635.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/5634
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5634/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5634/comments
https://api.github.com/repos/huggingface/datasets/issues/5634/events
https://github.com/huggingface/datasets/issues/5634
1,622,424,174
I_kwDODunzps5gtDpu
5,634
Not all progress bars show up when they should while downloading a dataset
{ "login": "garlandz-db", "id": 110427462, "node_id": "U_kgDOBpT9Rg", "avatar_url": "https://avatars.githubusercontent.com/u/110427462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/garlandz-db", "html_url": "https://github.com/garlandz-db", "followers_url": "https://api.github.com/users/garlandz-db/followers", "following_url": "https://api.github.com/users/garlandz-db/following{/other_user}", "gists_url": "https://api.github.com/users/garlandz-db/gists{/gist_id}", "starred_url": "https://api.github.com/users/garlandz-db/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/garlandz-db/subscriptions", "organizations_url": "https://api.github.com/users/garlandz-db/orgs", "repos_url": "https://api.github.com/users/garlandz-db/repos", "events_url": "https://api.github.com/users/garlandz-db/events{/privacy}", "received_events_url": "https://api.github.com/users/garlandz-db/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Hi! \r\n\r\nBy default, tqdm has `leave=True` to \"keep all traces of the progress bar upon the termination of iteration\". However, we use `leave=False` in some places (as of recently), which removes the bar once the iteration is over.\r\n\r\nI feel like our TQDM bars are noisy, so I think we should always set `leave=False` and also use the `delay` parameter to display progress bars only for tasks that take time (e.g., more than 3s). What do you think about this? Do you find these bars useful (after the dataset generation is over)?\r\n", "Hi sorry for the late update. I think the problem still exists despite the `leave` flag\r\n\r\n<img width=\"1105\" alt=\"image\" src=\"https://user-images.githubusercontent.com/110427462/226501615-5b02fb02-fd5f-4eda-b1f7-a7ed6570892d.png\">\r\n\r\n\r\n```\r\nPackage Version\r\n------------------------ ---------\r\naiofiles 22.1.0\r\naiohttp 3.8.4\r\naiosignal 1.3.1\r\naiosqlite 0.18.0\r\nanyio 3.6.2\r\nappnope 0.1.3\r\nargon2-cffi 21.3.0\r\nargon2-cffi-bindings 21.2.0\r\narrow 1.2.3\r\nasttokens 2.2.1\r\nasync-generator 1.10\r\nasync-timeout 4.0.2\r\nattrs 22.2.0\r\nBabel 2.12.1\r\nbackcall 0.2.0\r\nbeautifulsoup4 4.11.2\r\nbleach 6.0.0\r\nbrotlipy 0.7.0\r\ncertifi 2022.12.7\r\ncffi 1.15.1\r\ncfgv 3.3.1\r\ncharset-normalizer 2.1.1\r\ncomm 0.1.2\r\nconda 22.9.0\r\nconda-package-handling 2.0.2\r\nconda_package_streaming 0.7.0\r\ncoverage 7.2.1\r\ncryptography 38.0.4\r\ndatasets 2.8.0\r\ndebugpy 1.6.6\r\ndecorator 5.1.1\r\ndefusedxml 0.7.1\r\ndill 0.3.6\r\ndistlib 0.3.6\r\ndistro 1.4.0\r\nentrypoints 0.4\r\nexceptiongroup 1.1.0\r\nexecuting 1.2.0\r\nfastjsonschema 2.16.3\r\nfilelock 3.9.0\r\nflaky 3.7.0\r\nfqdn 1.5.1\r\nfrozenlist 1.3.3\r\nfsspec 2023.3.0\r\nhuggingface-hub 0.10.1\r\nidentify 2.5.18\r\nidna 3.4\r\niniconfig 2.0.0\r\nipykernel 6.12.1\r\nipyparallel 8.4.1\r\nipython 7.32.0\r\nipython-genutils 0.2.0\r\nipywidgets 8.0.4\r\nisoduration 20.11.0\r\njedi 0.18.2\r\nJinja2 3.1.2\r\njson5 0.9.11\r\njsonpointer 2.3\r\njsonschema 4.17.3\r\njupyter_client 8.0.3\r\njupyter_core 5.2.0\r\njupyter-events 0.6.3\r\njupyter_server 2.4.0\r\njupyter_server_fileid 0.8.0\r\njupyter_server_terminals 0.4.4\r\njupyter_server_ydoc 0.6.1\r\njupyter-ydoc 0.2.2\r\njupyterlab 3.6.1\r\njupyterlab-pygments 0.2.2\r\njupyterlab_server 2.20.0\r\njupyterlab-widgets 3.0.5\r\nlibmambapy 1.1.0\r\nmamba 1.1.0\r\nMarkupSafe 2.1.2\r\nmatplotlib-inline 0.1.6\r\nmistune 2.0.5\r\nmultidict 6.0.4\r\nmultiprocess 0.70.14\r\nnbclassic 0.5.3\r\nnbclient 0.7.2\r\nnbconvert 7.2.9\r\nnbformat 5.7.3\r\nnest-asyncio 1.5.6\r\nnodeenv 1.7.0\r\nnotebook 6.5.3\r\nnotebook_shim 0.2.2\r\nnumpy 1.24.2\r\noutcome 1.2.0\r\npackaging 23.0\r\npandas 1.5.3\r\npandocfilters 1.5.0\r\nparso 0.8.3\r\npexpect 4.8.0\r\npickleshare 0.7.5\r\npip 22.3.1\r\nplatformdirs 3.0.0\r\nplotly 5.13.1\r\npluggy 1.0.0\r\npre-commit 3.1.0\r\nprometheus-client 0.16.0\r\nprompt-toolkit 3.0.38\r\npsutil 5.9.4\r\nptyprocess 0.7.0\r\npure-eval 0.2.2\r\npyarrow 11.0.0\r\npycosat 0.6.4\r\npycparser 2.21\r\nPygments 2.14.0\r\npyOpenSSL 22.1.0\r\npyrsistent 0.19.3\r\nPySocks 1.7.1\r\npytest 7.2.1\r\npytest-asyncio 0.20.3\r\npytest-cov 4.0.0\r\npytest-timeout 2.1.0\r\npython-dateutil 2.8.2\r\npython-json-logger 2.0.7\r\npytz 2022.7.1\r\nPyYAML 6.0\r\npyzmq 25.0.0\r\nrequests 2.28.1\r\nresponses 0.18.0\r\nrfc3339-validator 0.1.4\r\nrfc3986-validator 0.1.1\r\nruamel-yaml-conda 0.15.80\r\nSend2Trash 1.8.0\r\nsetuptools 65.6.3\r\nsimplegeneric 0.8.1\r\nsix 1.16.0\r\nsniffio 1.3.0\r\nsortedcontainers 2.4.0\r\nsoupsieve 2.4\r\nstack-data 
0.6.2\r\ntenacity 8.2.2\r\nterminado 0.17.1\r\ntinycss2 1.2.1\r\ntomli 2.0.1\r\ntoolz 0.12.0\r\ntornado 6.2\r\ntqdm 4.65.0\r\ntraitlets 5.8.1\r\ntrio 0.22.0\r\ntyping_extensions 4.5.0\r\nuri-template 1.2.0\r\nurllib3 1.26.13\r\nvirtualenv 20.19.0\r\nwcwidth 0.2.6\r\nwebcolors 1.12\r\nwebencodings 0.5.1\r\nwebsocket-client 1.5.1\r\nwheel 0.38.4\r\nwidgetsnbextension 4.0.5\r\nxxhash 3.2.0\r\ny-py 0.5.9\r\nyarl 1.8.2\r\nypy-websocket 0.8.2\r\nzstandard 0.19.0\r\n```\r\n\r\nAny idea why this is happening? I debugged this to know the tqdm.pbar value is not being updated properly and its not the kernel not sending the comm messages to the IProgress bar" ]
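The `leave`/`delay` behavior discussed in the first comment can be illustrated with plain tqdm, independent of `datasets`:

```python
# Standalone illustration of the tqdm parameters discussed above; this is not
# datasets code.
import time
from tqdm.auto import tqdm

for _ in tqdm(range(20), leave=False, delay=3):
    # leave=False erases the bar once iteration ends; delay=3 only renders it
    # if the loop is still running after 3 seconds.
    time.sleep(0.2)
```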
1,678,748,658,000
1,679,363,999,000
null
NONE
null
### Describe the bug During downloading the rotten tomatoes dataset, not all progress bars are displayed properly. This might be related to [this ticket](https://github.com/huggingface/datasets/issues/5117) as it raised the same concern but its not clear if the fix solves this issue too. ipywidgets <img width="1243" alt="image" src="https://user-images.githubusercontent.com/110427462/224851138-13fee5b7-ab51-4883-b96f-1b9808782e3b.png"> tqdm <img width="1251" alt="Screen Shot 2023-03-13 at 3 58 59 PM" src="https://user-images.githubusercontent.com/110427462/224851180-5feb7825-9250-4b1e-ad0c-f3172ac1eb78.png"> ### Steps to reproduce the bug 1. Run this line ``` from datasets import load_dataset rotten_tomatoes = load_dataset("rotten_tomatoes", split="train") ``` ### Expected behavior all progress bars for builder script, metadata, readme, training, validation, and test set ### Environment info requirements.txt ``` aiofiles==22.1.0 aiohttp==3.8.4 aiosignal==1.3.1 aiosqlite==0.18.0 anyio==3.6.2 appnope==0.1.3 argon2-cffi==21.3.0 argon2-cffi-bindings==21.2.0 arrow==1.2.3 asttokens==2.2.1 async-generator==1.10 async-timeout==4.0.2 attrs==22.2.0 Babel==2.12.1 backcall==0.2.0 beautifulsoup4==4.11.2 bleach==6.0.0 brotlipy @ file:///Users/runner/miniforge3/conda-bld/brotlipy_1666764961872/work certifi==2022.12.7 cffi @ file:///Users/runner/miniforge3/conda-bld/cffi_1671179414629/work cfgv==3.3.1 charset-normalizer @ file:///home/conda/feedstock_root/build_artifacts/charset-normalizer_1661170624537/work comm==0.1.2 conda==22.9.0 conda-package-handling @ file:///home/conda/feedstock_root/build_artifacts/conda-package-handling_1669907009957/work conda_package_streaming @ file:///home/conda/feedstock_root/build_artifacts/conda-package-streaming_1669733752472/work coverage==7.2.1 cryptography @ file:///Users/runner/miniforge3/conda-bld/cryptography_1669592251328/work datasets==2.1.0 debugpy==1.6.6 decorator==5.1.1 defusedxml==0.7.1 dill==0.3.6 distlib==0.3.6 distro==1.4.0 entrypoints==0.4 exceptiongroup==1.1.0 executing==1.2.0 fastjsonschema==2.16.3 filelock==3.9.0 flaky==3.7.0 fqdn==1.5.1 frozenlist==1.3.3 fsspec==2023.3.0 huggingface-hub==0.10.1 identify==2.5.18 idna @ file:///home/conda/feedstock_root/build_artifacts/idna_1663625384323/work iniconfig==2.0.0 ipykernel==6.12.1 ipyparallel==8.4.1 ipython==7.32.0 ipython-genutils==0.2.0 ipywidgets==8.0.4 isoduration==20.11.0 jedi==0.18.2 Jinja2==3.1.2 json5==0.9.11 jsonpointer==2.3 jsonschema==4.17.3 jupyter-events==0.6.3 jupyter-ydoc==0.2.2 jupyter_client==8.0.3 jupyter_core==5.2.0 jupyter_server==2.4.0 jupyter_server_fileid==0.8.0 jupyter_server_terminals==0.4.4 jupyter_server_ydoc==0.6.1 jupyterlab==3.6.1 jupyterlab-pygments==0.2.2 jupyterlab-widgets==3.0.5 jupyterlab_server==2.20.0 libmambapy @ file:///Users/runner/miniforge3/conda-bld/mamba-split_1671598370072/work/libmambapy mamba @ file:///Users/runner/miniforge3/conda-bld/mamba-split_1671598370072/work/mamba MarkupSafe==2.1.2 matplotlib-inline==0.1.6 mistune==2.0.5 multidict==6.0.4 multiprocess==0.70.14 nbclassic==0.5.3 nbclient==0.7.2 nbconvert==7.2.9 nbformat==5.7.3 nest-asyncio==1.5.6 nodeenv==1.7.0 notebook==6.5.3 notebook_shim==0.2.2 numpy==1.24.2 outcome==1.2.0 packaging==23.0 pandas==1.5.3 pandocfilters==1.5.0 parso==0.8.3 pexpect==4.8.0 pickleshare==0.7.5 platformdirs==3.0.0 plotly==5.13.1 pluggy==1.0.0 pre-commit==3.1.0 prometheus-client==0.16.0 prompt-toolkit==3.0.38 psutil==5.9.4 ptyprocess==0.7.0 pure-eval==0.2.2 pyarrow==11.0.0 pycosat @ 
file:///Users/runner/miniforge3/conda-bld/pycosat_1666836580084/work pycparser @ file:///home/conda/feedstock_root/build_artifacts/pycparser_1636257122734/work Pygments==2.14.0 pyOpenSSL @ file:///home/conda/feedstock_root/build_artifacts/pyopenssl_1665350324128/work pyrsistent==0.19.3 PySocks @ file:///home/conda/feedstock_root/build_artifacts/pysocks_1661604839144/work pytest==7.2.1 pytest-asyncio==0.20.3 pytest-cov==4.0.0 pytest-timeout==2.1.0 python-dateutil==2.8.2 python-json-logger==2.0.7 pytz==2022.7.1 PyYAML==6.0 pyzmq==25.0.0 requests @ file:///home/conda/feedstock_root/build_artifacts/requests_1661872987712/work responses==0.18.0 rfc3339-validator==0.1.4 rfc3986-validator==0.1.1 ruamel-yaml-conda @ file:///Users/runner/miniforge3/conda-bld/ruamel_yaml_1666819760545/work Send2Trash==1.8.0 simplegeneric==0.8.1 six==1.16.0 sniffio==1.3.0 sortedcontainers==2.4.0 soupsieve==2.4 stack-data==0.6.2 tenacity==8.2.2 terminado==0.17.1 tinycss2==1.2.1 tomli==2.0.1 toolz @ file:///home/conda/feedstock_root/build_artifacts/toolz_1657485559105/work tornado==6.2 tqdm==4.64.1 traitlets==5.8.1 trio==0.22.0 typing_extensions==4.5.0 uri-template==1.2.0 urllib3 @ file:///home/conda/feedstock_root/build_artifacts/urllib3_1669259737463/work virtualenv==20.19.0 wcwidth==0.2.6 webcolors==1.12 webencodings==0.5.1 websocket-client==1.5.1 widgetsnbextension==4.0.5 xxhash==3.2.0 y-py==0.5.9 yarl==1.8.2 ypy-websocket==0.8.2 zstandard==0.19.0 ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5634/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5634/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5633
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5633/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5633/comments
https://api.github.com/repos/huggingface/datasets/issues/5633/events
https://github.com/huggingface/datasets/issues/5633
1,621,469,970
I_kwDODunzps5gpasS
5,633
Cannot import datasets
{ "login": "eerio", "id": 11250555, "node_id": "MDQ6VXNlcjExMjUwNTU1", "avatar_url": "https://avatars.githubusercontent.com/u/11250555?v=4", "gravatar_id": "", "url": "https://api.github.com/users/eerio", "html_url": "https://github.com/eerio", "followers_url": "https://api.github.com/users/eerio/followers", "following_url": "https://api.github.com/users/eerio/following{/other_user}", "gists_url": "https://api.github.com/users/eerio/gists{/gist_id}", "starred_url": "https://api.github.com/users/eerio/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eerio/subscriptions", "organizations_url": "https://api.github.com/users/eerio/orgs", "repos_url": "https://api.github.com/users/eerio/repos", "events_url": "https://api.github.com/users/eerio/events{/privacy}", "received_events_url": "https://api.github.com/users/eerio/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Okay, the issue was likely caused by mixing `conda` and `pip` usage - I forgot that I have already used `pip` in this environment previously and that it was 'spoiled' because of it. Creating another environment and installing `datasets` by pip with other packages from the `requirements.txt` file solved the problem." ]
1,678,713,284,000
1,678,730,059,000
1,678,730,059,000
NONE
null
### Describe the bug Hi, I cannot even import the library :( I installed it by running: ``` $ conda install datasets ``` Then I realized I should maybe use the huggingface channel, because I encountered the error below, so I ran: ``` $ conda remove datasets $ conda install -c huggingface datasets ``` Please see 'steps to reproduce the bug' for the specific error, as steps to reproduce is just importing the library ### Steps to reproduce the bug ``` $ python3 Python 3.8.15 (default, Nov 24 2022, 15:19:38) [GCC 11.2.0] :: Anaconda, Inc. on linux Type "help", "copyright", "credits" or "license" for more information. >>> import datasets Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/jack/.conda/envs/jack_zpp/lib/python3.8/site-packages/datasets/__init__.py", line 33, in <module> from .arrow_dataset import Dataset, concatenate_datasets File "/home/jack/.conda/envs/jack_zpp/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 59, in <module> from .arrow_reader import ArrowReader File "/home/jack/.conda/envs/jack_zpp/lib/python3.8/site-packages/datasets/arrow_reader.py", line 27, in <module> import pyarrow.parquet as pq File "/home/jack/.conda/envs/jack_zpp/lib/python3.8/site-packages/pyarrow/parquet/__init__.py", line 20, in <module> from .core import * File "/home/jack/.conda/envs/jack_zpp/lib/python3.8/site-packages/pyarrow/parquet/core.py", line 37, in <module> from pyarrow._parquet import (ParquetReader, Statistics, # noqa ImportError: cannot import name 'FileEncryptionProperties' from 'pyarrow._parquet' (/home/jack/.conda/envs/jack_zpp/lib/python3.8/site-packages/pyarrow/_parquet.cpython-38-x86_64-linux-gnu.so) ``` ### Expected behavior I would expect for the statement `import datasets` to cause no error ### Environment info Output of `conda list`: ``` # packages in environment at /home/jack/.conda/envs/pbalawender_zpp: # # Name Version Build Channel _libgcc_mutex 0.1 main _openmp_mutex 5.1 1_gnu abseil-cpp 20210324.2 h2531618_0 advertools 0.13.2 pypi_0 pypi aiofiles 0.8.0 pypi_0 pypi aiohttp 3.8.3 py38h5eee18b_0 aiosignal 1.2.0 pyhd3eb1b0_0 aiosqlite 0.17.0 pypi_0 pypi anyio 3.6.2 pypi_0 pypi aquirdturtle-collapsible-headings 3.1.0 pypi_0 pypi argon2-cffi 21.3.0 pypi_0 pypi argon2-cffi-bindings 21.2.0 pypi_0 pypi arrow 1.2.3 pypi_0 pypi arrow-cpp 3.0.0 py38h6b21186_4 asttokens 2.2.0 pypi_0 pypi async-timeout 4.0.2 py38h06a4308_0 attrs 22.1.0 py38h06a4308_0 automat 22.10.0 pypi_0 pypi aws-c-common 0.4.57 he6710b0_1 aws-c-event-stream 0.1.6 h2531618_5 aws-checksums 0.1.9 he6710b0_0 aws-sdk-cpp 1.8.185 hce553d0_0 babel 2.11.0 pypi_0 pypi backcall 0.2.0 pyhd3eb1b0_0 beautifulsoup4 4.11.1 pypi_0 pypi blas 1.0 mkl bleach 5.0.1 pypi_0 pypi boost-cpp 1.73.0 h27cfd23_11 bottleneck 1.3.5 py38h7deecbd_0 brotli 1.0.9 h5eee18b_7 brotli-bin 1.0.9 h5eee18b_7 brotlipy 0.7.0 py38h27cfd23_1003 bzip2 1.0.8 h7b6447c_0 c-ares 1.18.1 h7f8727e_0 ca-certificates 2023.01.10 h06a4308_0 certifi 2022.9.24 pypi_0 pypi cffi 1.15.1 py38h5eee18b_3 charset-normalizer 2.1.1 pypi_0 pypi click 8.1.3 pypi_0 pypi constantly 15.1.0 pypi_0 pypi contourpy 1.0.6 pypi_0 pypi cryptography 38.0.4 pypi_0 pypi cssselect 1.2.0 pypi_0 pypi cudatoolkit 10.1.243 h8cb64d8_10 conda-forge cycler 0.11.0 pypi_0 pypi dacite 1.6.0 pypi_0 pypi dataclasses 0.8 pyh6d0b6a4_7 datasets 1.18.4 py_0 huggingface datetime 4.7 pypi_0 pypi debugpy 1.6.4 pypi_0 pypi decorator 5.1.1 pyhd3eb1b0_0 defusedxml 0.7.1 pypi_0 pypi dill 0.3.6 py38h06a4308_0 docker-pycreds 0.4.0 pypi_0 pypi double-conversion 3.1.5 he6710b0_1 
entrypoints 0.4 py38h06a4308_0 executing 0.8.3 pyhd3eb1b0_0 filelock 3.8.0 pypi_0 pypi flake8 6.0.0 pypi_0 pypi flask 2.1.3 py38h06a4308_0 flit-core 3.6.0 pyhd3eb1b0_0 fonttools 4.38.0 pypi_0 pypi fqdn 1.5.1 pypi_0 pypi freetype 2.12.1 h4a9f257_0 frozenlist 1.3.3 py38h5eee18b_0 fsspec 2022.11.0 py38h06a4308_0 gensim 4.2.0 pypi_0 pypi gflags 2.2.2 he6710b0_0 giflib 5.2.1 h5eee18b_3 gitdb 4.0.10 pypi_0 pypi gitpython 3.1.30 pypi_0 pypi glog 0.5.0 h2531618_0 grpc-cpp 1.39.0 hae934f6_5 huggingface-hub 0.11.1 pypi_0 pypi huggingface_hub 0.13.1 py_0 huggingface hyperlink 21.0.0 pypi_0 pypi icu 58.2 he6710b0_3 idna 3.4 py38h06a4308_0 importlib-metadata 5.1.0 pypi_0 pypi importlib_metadata 4.11.3 hd3eb1b0_0 importlib_resources 5.2.0 pyhd3eb1b0_1 incremental 22.10.0 pypi_0 pypi intel-openmp 2021.4.0 h06a4308_3561 ipykernel 6.17.1 pyh210e3f2_0 conda-forge ipython 8.7.0 pypi_0 pypi ipython-genutils 0.2.0 pypi_0 pypi ipywidgets 8.0.2 pyhd8ed1ab_1 conda-forge isoduration 20.11.0 pypi_0 pypi itemadapter 0.7.0 pypi_0 pypi itemloaders 1.0.6 pypi_0 pypi itsdangerous 2.0.1 pyhd3eb1b0_0 jedi 0.18.2 pypi_0 pypi jinja2 3.1.2 py38h06a4308_0 jmespath 1.0.1 pypi_0 pypi joblib 1.2.0 pypi_0 pypi jpeg 9b h024ee3a_2 json5 0.9.10 pypi_0 pypi jsonpickle 3.0.0 pypi_0 pypi jsonpointer 2.3 pypi_0 pypi jsonschema 4.17.3 py38h06a4308_0 jupyter-core 5.1.0 pypi_0 pypi jupyter-events 0.5.0 pypi_0 pypi jupyter-server 1.23.3 pypi_0 pypi jupyter-server-fileid 0.6.0 pypi_0 pypi jupyter-server-ydoc 0.4.0 pypi_0 pypi jupyter-ydoc 0.2.2 pypi_0 pypi jupyter_client 7.4.9 py38h06a4308_0 jupyter_core 5.2.0 py38h06a4308_0 jupyterlab 3.6.0a4 pypi_0 pypi jupyterlab-pygments 0.2.2 pypi_0 pypi jupyterlab-server 2.16.3 pypi_0 pypi jupyterlab_widgets 3.0.3 pyhd8ed1ab_0 conda-forge kiwisolver 1.4.4 pypi_0 pypi krb5 1.19.4 h568e23c_0 lcms2 2.12 h3be6417_0 ld_impl_linux-64 2.38 h1181459_1 libboost 1.73.0 h3ff78a5_11 libbrotlicommon 1.0.9 h5eee18b_7 libbrotlidec 1.0.9 h5eee18b_7 libbrotlienc 1.0.9 h5eee18b_7 libcurl 7.88.1 h91b91d3_0 libedit 3.1.20221030 h5eee18b_0 libev 4.33 h7f8727e_1 libevent 2.1.12 h8f2d780_0 libffi 3.4.2 h6a678d5_6 libgcc-ng 11.2.0 h1234567_1 libgomp 11.2.0 h1234567_1 libnghttp2 1.46.0 hce63b2e_0 libpng 1.6.39 h5eee18b_0 libprotobuf 3.17.2 h4ff587b_1 libsodium 1.0.18 h7b6447c_0 libssh2 1.10.0 h8f2d780_0 libstdcxx-ng 11.2.0 h1234567_1 libthrift 0.14.2 hcc01f38_0 libtiff 4.1.0 h2733197_1 libuv 1.44.2 h5eee18b_0 libwebp 1.2.0 h89dd481_0 lz4-c 1.9.4 h6a678d5_0 markupsafe 2.1.1 py38h7f8727e_0 matplotlib 3.6.2 pypi_0 pypi matplotlib-inline 0.1.6 py38h06a4308_0 mccabe 0.7.0 pypi_0 pypi mistune 2.0.4 pypi_0 pypi mkl 2021.4.0 h06a4308_640 mkl-service 2.4.0 py38h7f8727e_0 mkl_fft 1.3.1 py38hd3c417c_0 mkl_random 1.2.2 py38h51133e4_0 morfeusz2 1.99.6 pypi_0 pypi multidict 6.0.2 py38h5eee18b_0 multiprocess 0.70.14 py38h06a4308_0 nbclassic 0.4.8 pypi_0 pypi nbclient 0.7.2 pypi_0 pypi nbconvert 7.2.5 pypi_0 pypi nbformat 5.7.0 py38h06a4308_0 ncurses 6.4 h6a678d5_0 nest-asyncio 1.5.6 py38h06a4308_0 ninja 1.10.2 h06a4308_5 ninja-base 1.10.2 hd09550d_5 notebook 6.5.2 pypi_0 pypi notebook-shim 0.2.2 pypi_0 pypi numexpr 2.8.4 py38he184ba9_0 numpy 1.23.5 py38h14f4228_0 numpy-base 1.23.5 py38h31eccc5_0 oauthlib 3.2.2 pypi_0 pypi opencv-python 4.6.0.66 pypi_0 pypi openssl 1.1.1t h7f8727e_0 orc 1.6.9 ha97a36c_3 packaging 22.0 py38h06a4308_0 pandas 1.5.2 pypi_0 pypi pandocfilters 1.5.0 pypi_0 pypi parsel 1.7.0 pypi_0 pypi parso 0.8.3 pyhd3eb1b0_0 pathlib 1.0.1 pypi_0 pypi pathtools 0.1.2 pypi_0 pypi pexpect 4.8.0 pyhd3eb1b0_3 pickleshare 0.7.5 
pyhd3eb1b0_1003 pillow 9.3.0 pypi_0 pypi pip 22.2.2 py38h06a4308_0 pkgutil-resolve-name 1.3.10 py38h06a4308_0 platformdirs 2.5.4 pypi_0 pypi prometheus-client 0.15.0 pypi_0 pypi promise 2.3 pypi_0 pypi prompt-toolkit 3.0.33 pypi_0 pypi protego 0.2.1 pypi_0 pypi protobuf 4.21.12 pypi_0 pypi psutil 5.9.0 py38h5eee18b_0 ptyprocess 0.7.0 pyhd3eb1b0_2 pure_eval 0.2.2 pyhd3eb1b0_0 pyarrow 10.0.1 pypi_0 pypi pyasn1 0.4.8 pypi_0 pypi pyasn1-modules 0.2.8 pypi_0 pypi pycodestyle 2.10.0 pypi_0 pypi pycparser 2.21 pyhd3eb1b0_0 pydispatcher 2.0.6 pypi_0 pypi pyflakes 3.0.1 pypi_0 pypi pygments 2.11.2 pyhd3eb1b0_0 pyopenssl 22.1.0 pypi_0 pypi pyrsistent 0.18.0 py38heee7806_0 pysocks 1.7.1 py38h06a4308_0 python 3.8.15 h7a1cb2a_2 python-dateutil 2.8.2 pyhd3eb1b0_0 python-dotenv 0.21.0 pypi_0 pypi python-fastjsonschema 2.16.2 py38h06a4308_0 python-json-logger 2.0.4 pypi_0 pypi python-xxhash 2.0.2 py38h5eee18b_1 pytorch 1.7.1 py3.8_cuda10.1.243_cudnn7.6.3_0 pytorch pytz 2022.6 pypi_0 pypi pyyaml 6.0 py38h5eee18b_1 pyzmq 23.2.0 py38h6a678d5_0 queuelib 1.6.2 pypi_0 pypi re2 2022.04.01 h295c915_0 readline 8.2 h5eee18b_0 regex 2022.10.31 pypi_0 pypi requests 2.28.1 py38h06a4308_0 requests-file 1.5.1 pypi_0 pypi requests-oauthlib 1.3.1 pypi_0 pypi rfc3339-validator 0.1.4 pypi_0 pypi rfc3986-validator 0.1.1 pypi_0 pypi scikit-learn 1.1.3 pypi_0 pypi scipy 1.9.3 pypi_0 pypi scrapy 2.7.1 pypi_0 pypi seaborn 0.12.1 pypi_0 pypi send2trash 1.8.0 pypi_0 pypi sentry-sdk 1.12.1 pypi_0 pypi service-identity 21.1.0 pypi_0 pypi setproctitle 1.3.2 pypi_0 pypi setuptools 65.6.3 pypi_0 pypi shortuuid 1.0.11 pypi_0 pypi six 1.16.0 pyhd3eb1b0_1 smart-open 6.2.0 pypi_0 pypi smmap 5.0.0 pypi_0 pypi snappy 1.1.9 h295c915_0 sniffio 1.3.0 pypi_0 pypi soupsieve 2.3.2.post1 pypi_0 pypi sqlite 3.40.1 h5082296_0 stack-data 0.6.2 pypi_0 pypi stack_data 0.2.0 pyhd3eb1b0_0 terminado 0.17.0 pypi_0 pypi threadpoolctl 3.1.0 pypi_0 pypi tinycss2 1.2.1 pypi_0 pypi tk 8.6.12 h1ccaba5_0 tldextract 3.4.0 pypi_0 pypi tokenizers 0.13.2 pypi_0 pypi tomli 2.0.1 pypi_0 pypi torchvision 0.8.2 py38_cu101 pytorch tornado 6.2 py38h5eee18b_0 tqdm 4.64.1 py38h06a4308_0 traitlets 5.6.0 pypi_0 pypi transformers 4.25.1 pypi_0 pypi tweepy 4.12.1 pypi_0 pypi twisted 22.10.0 pypi_0 pypi twython 3.9.1 pypi_0 pypi typing-extensions 4.4.0 py38h06a4308_0 typing_extensions 4.4.0 py38h06a4308_0 uri-template 1.2.0 pypi_0 pypi uriparser 0.9.3 he6710b0_1 urllib3 1.26.13 pypi_0 pypi utf8proc 2.6.1 h27cfd23_0 w3lib 2.1.0 pypi_0 pypi wandb 0.13.7 pypi_0 pypi wcwidth 0.2.5 pyhd3eb1b0_0 webcolors 1.12 pypi_0 pypi webencodings 0.5.1 pypi_0 pypi websocket-client 1.4.2 pypi_0 pypi werkzeug 2.2.2 py38h06a4308_0 wheel 0.38.4 py38h06a4308_0 widgetsnbextension 4.0.3 py38h06a4308_0 xxhash 0.8.0 h7f8727e_3 xz 5.2.10 h5eee18b_1 y-py 0.5.4 pypi_0 pypi yaml 0.2.5 h7b6447c_0 yarl 1.8.1 py38h5eee18b_0 ypy-websocket 0.5.0 pypi_0 pypi zeromq 4.3.4 h2531618_0 zipp 3.11.0 py38h06a4308_0 zlib 1.2.13 h5eee18b_0 zope-interface 5.5.2 pypi_0 pypi zstd 1.4.9 haebb681_0 ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5633/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5633/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5632
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5632/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5632/comments
https://api.github.com/repos/huggingface/datasets/issues/5632/events
https://github.com/huggingface/datasets/issues/5632
1,621,177,391
I_kwDODunzps5goTQv
5,632
Dataset cannot convert too large dictionary
{ "login": "MaraLac", "id": 108518627, "node_id": "U_kgDOBnfc4w", "avatar_url": "https://avatars.githubusercontent.com/u/108518627?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MaraLac", "html_url": "https://github.com/MaraLac", "followers_url": "https://api.github.com/users/MaraLac/followers", "following_url": "https://api.github.com/users/MaraLac/following{/other_user}", "gists_url": "https://api.github.com/users/MaraLac/gists{/gist_id}", "starred_url": "https://api.github.com/users/MaraLac/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MaraLac/subscriptions", "organizations_url": "https://api.github.com/users/MaraLac/orgs", "repos_url": "https://api.github.com/users/MaraLac/repos", "events_url": "https://api.github.com/users/MaraLac/events{/privacy}", "received_events_url": "https://api.github.com/users/MaraLac/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Answered on the forum:\r\n\r\n> To fix the overflow error, we need to merge [support LargeListArray in pyarrow by xwwwwww Β· Pull Request #4800 Β· huggingface/datasets Β· GitHub](https://github.com/huggingface/datasets/pull/4800), which adds support for the large lists. However, before merging it, we need to come up with a cleaner API for large lists. I hope to find some time to address this before Datasets 3.0." ]
1,678,702,480,000
1,678,980,537,000
null
NONE
null
### Describe the bug Hello everyone! I tried to build a new dataset with the command "dict_valid = datasets.Dataset.from_dict({'input_values': values_array})". However, I have a very large dataset (~400 GB) and it seems that `datasets` cannot handle this. Indeed, I can create the dataset until my dictionary reaches a certain size, and then I get the error "OverflowError: Python int too large to convert to C long". Do you know how to solve this problem? Unfortunately I cannot give reproducible code because I cannot share such a large file, but you can find the code below (it's a test on only a part of the validation data, ~10 GB, and the error already occurs there). Thank you! ### Steps to reproduce the bug ``` SAVE_DIR = './data/' features = h5py.File(SAVE_DIR+'features.hdf5','r') valid_data = features["validation"]["data/features"] v_array_values = [np.float32(item[()]) for item in valid_data.values()] for i in range(len(v_array_values)): v_array_values[i] = v_array_values[i].round(decimals=5) dict_valid = datasets.Dataset.from_dict({'input_values': v_array_values}) ``` ### Expected behavior The code is expected to give me a Hugging Face dataset. ### Environment info python: 3.8.15 numpy: 1.22.3 datasets: 2.3.2 pyarrow: 8.0.0
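A workaround that may sidestep the overflow until large-list support lands: build the dataset with `Dataset.from_generator` (available in recent `datasets` releases), which writes examples to disk in small Arrow batches instead of materializing one giant in-memory column. This is a minimal sketch assuming the same HDF5 layout as in the report; `SAVE_DIR` and the key names are taken from the snippet above.

```python
import h5py
import numpy as np
from datasets import Dataset

SAVE_DIR = "./data/"  # same path as in the report


def gen():
    # Yield one example at a time so no ~400 GB dict is ever built in memory.
    with h5py.File(SAVE_DIR + "features.hdf5", "r") as features:
        valid_data = features["validation"]["data/features"]
        for item in valid_data.values():
            yield {"input_values": np.float32(item[()]).round(decimals=5)}


# Examples are flushed to disk in small Arrow batches, which may avoid the
# 32-bit offset overflow hit by Dataset.from_dict on one huge column.
dict_valid = Dataset.from_generator(gen)
```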
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5632/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5632/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5631
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5631/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5631/comments
https://api.github.com/repos/huggingface/datasets/issues/5631/events
https://github.com/huggingface/datasets/issues/5631
1,620,442,854
I_kwDODunzps5glf7m
5,631
Custom split names
{ "login": "ErfanMoosaviMonazzah", "id": 79091831, "node_id": "MDQ6VXNlcjc5MDkxODMx", "avatar_url": "https://avatars.githubusercontent.com/u/79091831?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ErfanMoosaviMonazzah", "html_url": "https://github.com/ErfanMoosaviMonazzah", "followers_url": "https://api.github.com/users/ErfanMoosaviMonazzah/followers", "following_url": "https://api.github.com/users/ErfanMoosaviMonazzah/following{/other_user}", "gists_url": "https://api.github.com/users/ErfanMoosaviMonazzah/gists{/gist_id}", "starred_url": "https://api.github.com/users/ErfanMoosaviMonazzah/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ErfanMoosaviMonazzah/subscriptions", "organizations_url": "https://api.github.com/users/ErfanMoosaviMonazzah/orgs", "repos_url": "https://api.github.com/users/ErfanMoosaviMonazzah/repos", "events_url": "https://api.github.com/users/ErfanMoosaviMonazzah/events{/privacy}", "received_events_url": "https://api.github.com/users/ErfanMoosaviMonazzah/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
[ "Hi!\r\n\r\nYou can also use names other than \"train\", \"validation\" and \"test\". As an example, check the [script](https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0/blob/e095840f23f3dffc1056c078c2f9320dad9ca74d/common_voice_11_0.py#L139) of the Common Voice 11 dataset. " ]
1,678,641,703,000
1,679,667,180,000
1,679,667,180,000
NONE
null
### Feature request Hi, I have participated in multiple NLP tasks where there are more than just train, test, and validation splits; there can be multiple validation or test sets. But it seems that currently only those three splits are supported. It would be nice to have support for more splits on the Hub. (Currently I can have more splits when I load datasets from URLs, but not from the Hub.) ### Motivation Easier access to more splits ### Your contribution No
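As the maintainer comment above points out, split names other than "train"/"validation"/"test" already work in loading scripts, because `datasets.SplitGenerator` accepts arbitrary names. A minimal sketch; the file names and feature schema here are made up for illustration:

```python
import datasets


class MultiValidationDataset(datasets.GeneratorBasedBuilder):
    """Toy builder showing that SplitGenerator accepts arbitrary split names."""

    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features({"text": datasets.Value("string")})
        )

    def _split_generators(self, dl_manager):
        # Split names are plain strings, so multiple validation/test sets work.
        return [
            datasets.SplitGenerator(name="train", gen_kwargs={"path": "train.txt"}),
            datasets.SplitGenerator(name="validation_1", gen_kwargs={"path": "dev1.txt"}),
            datasets.SplitGenerator(name="validation_2", gen_kwargs={"path": "dev2.txt"}),
            datasets.SplitGenerator(name="test_ood", gen_kwargs={"path": "test_ood.txt"}),
        ]

    def _generate_examples(self, path):
        with open(path, encoding="utf-8") as f:
            for idx, line in enumerate(f):
                yield idx, {"text": line.strip()}
```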
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5631/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5631/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5630
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5630/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5630/comments
https://api.github.com/repos/huggingface/datasets/issues/5630/events
https://github.com/huggingface/datasets/pull/5630
1,620,327,510
PR_kwDODunzps5L1ahF
5,630
adds early exit if url is `PathLike`
{ "login": "vvvm23", "id": 44398246, "node_id": "MDQ6VXNlcjQ0Mzk4MjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/44398246?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vvvm23", "html_url": "https://github.com/vvvm23", "followers_url": "https://api.github.com/users/vvvm23/followers", "following_url": "https://api.github.com/users/vvvm23/following{/other_user}", "gists_url": "https://api.github.com/users/vvvm23/gists{/gist_id}", "starred_url": "https://api.github.com/users/vvvm23/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vvvm23/subscriptions", "organizations_url": "https://api.github.com/users/vvvm23/orgs", "repos_url": "https://api.github.com/users/vvvm23/repos", "events_url": "https://api.github.com/users/vvvm23/events{/privacy}", "received_events_url": "https://api.github.com/users/vvvm23/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5630). All of your documentation changes will be reflected on that endpoint." ]
1,678,620,208,000
1,678,881,518,000
null
NONE
null
Closes #4864 Should fix errors thrown when attempting to load a `json` dataset using a `pathlib.Path` in the `data_files` argument.
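For context, a usage sketch of what this change enables (the file path is hypothetical): before the fix, a `pathlib.Path` in `data_files` raised an error, while the equivalent string worked.

```python
from pathlib import Path

from datasets import load_dataset

data_file = Path("data") / "train.json"  # hypothetical local file

ds = load_dataset("json", data_files=str(data_file))  # worked before this PR
ds = load_dataset("json", data_files=data_file)       # should work after it
```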
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5630/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5630/timeline
null
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5630", "html_url": "https://github.com/huggingface/datasets/pull/5630", "diff_url": "https://github.com/huggingface/datasets/pull/5630.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5630.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/5629
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5629/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5629/comments
https://api.github.com/repos/huggingface/datasets/issues/5629/events
https://github.com/huggingface/datasets/issues/5629
1,619,921,247
I_kwDODunzps5gjglf
5,629
load_dataset gives "403" error when using Financial phrasebank
{ "login": "Jimchoo91", "id": 67709789, "node_id": "MDQ6VXNlcjY3NzA5Nzg5", "avatar_url": "https://avatars.githubusercontent.com/u/67709789?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Jimchoo91", "html_url": "https://github.com/Jimchoo91", "followers_url": "https://api.github.com/users/Jimchoo91/followers", "following_url": "https://api.github.com/users/Jimchoo91/following{/other_user}", "gists_url": "https://api.github.com/users/Jimchoo91/gists{/gist_id}", "starred_url": "https://api.github.com/users/Jimchoo91/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Jimchoo91/subscriptions", "organizations_url": "https://api.github.com/users/Jimchoo91/orgs", "repos_url": "https://api.github.com/users/Jimchoo91/repos", "events_url": "https://api.github.com/users/Jimchoo91/events{/privacy}", "received_events_url": "https://api.github.com/users/Jimchoo91/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Hi! You seem to be using an outdated version of `datasets` that downloads the older script version. To avoid the error, you can either pass `revision=\"main\"` to `load_dataset` (this can fail if a script uses newer features of the lib) or update your installation with `pip install -U datasets` (better solution)." ]
1,678,520,799,000
1,678,732,046,000
null
NONE
null
When I try to load this dataset, I receive the following error: ConnectionError: Couldn't reach https://www.researchgate.net/profile/Pekka_Malo/publication/251231364_FinancialPhraseBank-v10/data/0c96051eee4fb1d56e000000/FinancialPhraseBank-v10.zip (error 403) Has this been seen before? The website loads fine when I try to access it manually. Thanks.
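Following the suggestion in the comment above, a sketch of the two options; the `sentences_allagree` config name is one of this dataset's configurations, used here for illustration:

```python
from datasets import load_dataset

# Option 1: pin the script revision to the current one on the Hub
ds = load_dataset("financial_phrasebank", "sentences_allagree", revision="main")

# Option 2 (preferred): upgrade first, then load normally
#   pip install -U datasets
ds = load_dataset("financial_phrasebank", "sentences_allagree")
```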
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5629/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5629/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5628
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5628/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5628/comments
https://api.github.com/repos/huggingface/datasets/issues/5628/events
https://github.com/huggingface/datasets/pull/5628
1,619,641,810
PR_kwDODunzps5LzVKi
5,628
add kwargs to index search
{ "login": "SaulLu", "id": 55560583, "node_id": "MDQ6VXNlcjU1NTYwNTgz", "avatar_url": "https://avatars.githubusercontent.com/u/55560583?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SaulLu", "html_url": "https://github.com/SaulLu", "followers_url": "https://api.github.com/users/SaulLu/followers", "following_url": "https://api.github.com/users/SaulLu/following{/other_user}", "gists_url": "https://api.github.com/users/SaulLu/gists{/gist_id}", "starred_url": "https://api.github.com/users/SaulLu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SaulLu/subscriptions", "organizations_url": "https://api.github.com/users/SaulLu/orgs", "repos_url": "https://api.github.com/users/SaulLu/repos", "events_url": "https://api.github.com/users/SaulLu/events{/privacy}", "received_events_url": "https://api.github.com/users/SaulLu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,678,483,498,000
1,678,891,727,000
1,678,891,564,000
CONTRIBUTOR
null
This PR proposes to add kwargs to index search methods. This is particularly useful for setting the timeout of a query on elasticsearch. A typical use case would be: ```python dset.add_elasticsearch_index("filename", es_client=es_client) scores, examples = dset.get_nearest_examples("filename", "my_name-train_29", request_timeout=60) ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5628/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5628/timeline
null
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5628", "html_url": "https://github.com/huggingface/datasets/pull/5628", "diff_url": "https://github.com/huggingface/datasets/pull/5628.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5628.patch", "merged_at": "2023-03-15T14:46:04" }
true
https://api.github.com/repos/huggingface/datasets/issues/5627
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5627/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5627/comments
https://api.github.com/repos/huggingface/datasets/issues/5627/events
https://github.com/huggingface/datasets/issues/5627
1,619,336,609
I_kwDODunzps5ghR2h
5,627
Unable to load AutoTrain-generated dataset from the hub
{ "login": "ijmiller2", "id": 8560151, "node_id": "MDQ6VXNlcjg1NjAxNTE=", "avatar_url": "https://avatars.githubusercontent.com/u/8560151?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ijmiller2", "html_url": "https://github.com/ijmiller2", "followers_url": "https://api.github.com/users/ijmiller2/followers", "following_url": "https://api.github.com/users/ijmiller2/following{/other_user}", "gists_url": "https://api.github.com/users/ijmiller2/gists{/gist_id}", "starred_url": "https://api.github.com/users/ijmiller2/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ijmiller2/subscriptions", "organizations_url": "https://api.github.com/users/ijmiller2/orgs", "repos_url": "https://api.github.com/users/ijmiller2/repos", "events_url": "https://api.github.com/users/ijmiller2/events{/privacy}", "received_events_url": "https://api.github.com/users/ijmiller2/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "The AutoTrain format is not supported right now. I think it would require a dedicated dataset builder", "Okay, good to know. Thanks for the reply. For now I will just have to\nmanage the split manually before training, because I can’t find any way of\npulling out file indices or file names from the autogenerated split. The\nfile names field of the image dataset (loaded directly from arrow file) is\nmissing, just fyi (for anyone else this might be relevant too).\n\nOn Fri, Mar 10, 2023 at 7:02 PM Quentin Lhoest ***@***.***>\nwrote:\n\n> The AutoTrain format is not supported right now. I think it would require\n> a dedicated dataset builder\n>\n> β€”\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/issues/5627#issuecomment-1464734308>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ACBJ4F5A353MCZ76OGRJ6CTW3PFI7ANCNFSM6AAAAAAVWXNUTE>\n> .\n> You are receiving this because you authored the thread.Message ID:\n> ***@***.***>\n>\n" ]
1,678,469,158,000
1,678,549,482,000
null
NONE
null
### Describe the bug DatasetGenerationError: An error occurred while generating the dataset -> ValueError: Couldn't cast ... because column names don't match ``` ValueError: Couldn't cast _data_files: list<item: struct<filename: string>> child 0, item: struct<filename: string> child 0, filename: string _fingerprint: string _format_columns: list<item: string> child 0, item: string _format_kwargs: struct<> _format_type: null _indexes: struct<> _output_all_columns: bool _split: null to {'citation': Value(dtype='string', id=None), 'description': Value(dtype='string', id=None), 'features': {'image': {'_type': Value(dtype='string', id=None)}, 'target': {'names': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), '_type': Value(dtype='string', id=None)}}, 'homepage': Value(dtype='string', id=None), 'license': Value(dtype='string', id=None), 'splits': {'train': {'name': Value(dtype='string', id=None), 'num_bytes': Value(dtype='int64', id=None), 'num_examples': Value(dtype='int64', id=None), 'dataset_name': Value(dtype='null', id=None)}}} because column names don't match ``` ### Steps to reproduce the bug Steps to reproduce: 1. `pip install datasets==2.10.1` 2. Attempt to load (private dataset). Note that I'm authenticated via ` huggingface-cli login` ``` from datasets import load_dataset # load dataset dataset = "ijmiller2/autotrain-data-betterbin-vision-10000" dataset = load_dataset(dataset) ``` Here's the full traceback: ```Downloading and preparing dataset json/ijmiller2--autotrain-data-betterbin-vision-10000 to /Users/ian/.cache/huggingface/datasets/ijmiller2___json/ijmiller2--autotrain-data-betterbin-vision-10000-2eae034a9ff8a1a9/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51... Downloading data files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 2/2 [00:00<00:00, 2383.80it/s] Extracting data files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 2/2 [00:00<00:00, 505.95it/s] --------------------------------------------------------------------------- ValueError Traceback (most recent call last) File ~/anaconda3/envs/betterbin/lib/python3.8/site-packages/datasets/builder.py:1874, in ArrowBasedBuilder._prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id) 1868 writer = writer_class( 1869 features=writer._features, 1870 path=fpath.replace("SSSSS", f"{shard_id:05d}").replace("JJJJJ", f"{job_id:05d}"), 1871 storage_options=self._fs.storage_options, 1872 embed_local_files=embed_local_files, 1873 ) -> 1874 writer.write_table(table) 1875 num_examples_progress_update += len(table) File ~/anaconda3/envs/betterbin/lib/python3.8/site-packages/datasets/arrow_writer.py:568, in ArrowWriter.write_table(self, pa_table, writer_batch_size) 567 pa_table = pa_table.combine_chunks() --> 568 pa_table = table_cast(pa_table, self._schema) 569 if self.embed_local_files: File ~/anaconda3/envs/betterbin/lib/python3.8/site-packages/datasets/table.py:2312, in table_cast(table, schema) 2311 if table.schema != schema: -> 2312 return cast_table_to_schema(table, schema) 2313 elif table.schema.metadata != schema.metadata: File ~/anaconda3/envs/betterbin/lib/python3.8/site-packages/datasets/table.py:2270, in 
cast_table_to_schema(table, schema) 2269 if sorted(table.column_names) != sorted(features): -> 2270 raise ValueError(f"Couldn't cast\n{table.schema}\nto\n{features}\nbecause column names don't match") 2271 arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()] ValueError: Couldn't cast _data_files: list<item: struct<filename: string>> child 0, item: struct<filename: string> child 0, filename: string _fingerprint: string _format_columns: list<item: string> child 0, item: string _format_kwargs: struct<> _format_type: null _indexes: struct<> _output_all_columns: bool _split: null to {'citation': Value(dtype='string', id=None), 'description': Value(dtype='string', id=None), 'features': {'image': {'_type': Value(dtype='string', id=None)}, 'target': {'names': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), '_type': Value(dtype='string', id=None)}}, 'homepage': Value(dtype='string', id=None), 'license': Value(dtype='string', id=None), 'splits': {'train': {'name': Value(dtype='string', id=None), 'num_bytes': Value(dtype='int64', id=None), 'num_examples': Value(dtype='int64', id=None), 'dataset_name': Value(dtype='null', id=None)}}} because column names don't match The above exception was the direct cause of the following exception: DatasetGenerationError Traceback (most recent call last) Input In [8], in <cell line: 6>() 4 # load dataset 5 dataset = "ijmiller2/autotrain-data-betterbin-vision-10000" ----> 6 dataset = load_dataset(dataset) File ~/anaconda3/envs/betterbin/lib/python3.8/site-packages/datasets/load.py:1782, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, num_proc, **config_kwargs) 1779 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES 1781 # Download and prepare data -> 1782 builder_instance.download_and_prepare( 1783 download_config=download_config, 1784 download_mode=download_mode, 1785 verification_mode=verification_mode, 1786 try_from_hf_gcs=try_from_hf_gcs, 1787 num_proc=num_proc, 1788 ) 1790 # Build dataset for splits 1791 keep_in_memory = ( 1792 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size) 1793 ) File ~/anaconda3/envs/betterbin/lib/python3.8/site-packages/datasets/builder.py:872, in DatasetBuilder.download_and_prepare(self, output_dir, download_config, download_mode, verification_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs) 870 if num_proc is not None: 871 prepare_split_kwargs["num_proc"] = num_proc --> 872 self._download_and_prepare( 873 dl_manager=dl_manager, 874 verification_mode=verification_mode, 875 **prepare_split_kwargs, 876 **download_and_prepare_kwargs, 877 ) 878 # Sync info 879 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values()) File ~/anaconda3/envs/betterbin/lib/python3.8/site-packages/datasets/builder.py:967, in DatasetBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs) 963 split_dict.add(split_generator.split_info) 965 try: 966 # Prepare split will record examples associated to the split --> 967 self._prepare_split(split_generator, **prepare_split_kwargs) 968 except OSError as e: 969 raise OSError( 970 "Cannot find data file. 
" 971 + (self.manual_download_instructions or "") 972 + "\nOriginal error:\n" 973 + str(e) 974 ) from None File ~/anaconda3/envs/betterbin/lib/python3.8/site-packages/datasets/builder.py:1749, in ArrowBasedBuilder._prepare_split(self, split_generator, file_format, num_proc, max_shard_size) 1747 job_id = 0 1748 with pbar: -> 1749 for job_id, done, content in self._prepare_split_single( 1750 gen_kwargs=gen_kwargs, job_id=job_id, **_prepare_split_args 1751 ): 1752 if done: 1753 result = content File ~/anaconda3/envs/betterbin/lib/python3.8/site-packages/datasets/builder.py:1892, in ArrowBasedBuilder._prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id) 1890 if isinstance(e, SchemaInferenceError) and e.__context__ is not None: 1891 e = e.__context__ -> 1892 raise DatasetGenerationError("An error occurred while generating the dataset") from e 1894 yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths) DatasetGenerationError: An error occurred while generating the dataset ``` ### Expected behavior I'm ultimately trying to generate my own performance metrics on validation data (before putting an endpoint into production) and so was hoping to load all or at least the validation subset from the hub. I'm expecting the `load_dataset()` function to work as shown in the documentation [here](https://huggingface.co/docs/datasets/loading#hugging-face-hub): ```python dataset = load_dataset( "lhoestq/custom_squad", revision="main" # tag name, or branch name, or commit hash ) ``` ### Environment info - `datasets` version: 2.10.1 - Platform: macOS-13.2.1-arm64-arm-64bit - Python version: 3.8.13 - PyArrow version: 9.0.0 - Pandas version: 1.4.4
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5627/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5627/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5626
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5626/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5626/comments
https://api.github.com/repos/huggingface/datasets/issues/5626/events
https://github.com/huggingface/datasets/pull/5626
1,619,252,984
PR_kwDODunzps5LyBT4
5,626
Support streaming datasets with numpy.load
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006607 / 0.011353 (-0.004746) | 0.004610 / 0.011008 (-0.006398) | 0.100673 / 0.038508 (0.062165) | 0.027739 / 0.023109 (0.004630) | 0.326290 / 0.275898 (0.050392) | 0.344296 / 0.323480 (0.020816) | 0.005021 / 0.007986 (-0.002964) | 0.003327 / 0.004328 (-0.001002) | 0.077779 / 0.004250 (0.073529) | 0.040237 / 0.037052 (0.003185) | 0.308992 / 0.258489 (0.050503) | 0.355017 / 0.293841 (0.061176) | 0.031203 / 0.128546 (-0.097343) | 0.011749 / 0.075646 (-0.063898) | 0.327431 / 0.419271 (-0.091840) | 0.043033 / 0.043533 (-0.000500) | 0.309713 / 0.255139 (0.054574) | 0.336550 / 0.283200 (0.053351) | 0.084891 / 0.141683 (-0.056792) | 1.555641 / 1.452155 (0.103487) | 1.613214 / 1.492716 (0.120497) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.216269 / 0.018006 (0.198262) | 0.422066 / 0.000490 (0.421576) | 0.004055 / 0.000200 (0.003855) | 0.000073 / 0.000054 (0.000019) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023759 / 0.037411 (-0.013652) | 0.096937 / 0.014526 (0.082411) | 0.105312 / 0.176557 (-0.071244) | 0.167840 / 0.737135 (-0.569295) | 0.107998 / 0.296338 (-0.188340) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.458315 / 0.215209 (0.243106) | 4.584803 / 2.077655 (2.507148) | 
2.193641 / 1.504120 (0.689521) | 1.981494 / 1.541195 (0.440299) | 2.020358 / 1.468490 (0.551868) | 0.696763 / 4.584777 (-3.888014) | 3.388432 / 3.745712 (-0.357280) | 3.335038 / 5.269862 (-1.934823) | 1.648551 / 4.565676 (-2.917126) | 0.083753 / 0.424275 (-0.340522) | 0.012855 / 0.007607 (0.005248) | 0.562331 / 0.226044 (0.336286) | 5.649259 / 2.268929 (3.380330) | 2.680309 / 55.444624 (-52.764315) | 2.319297 / 6.876477 (-4.557180) | 2.444016 / 2.142072 (0.301943) | 0.809821 / 4.805227 (-3.995407) | 0.152855 / 6.500664 (-6.347809) | 0.067756 / 0.075469 (-0.007713) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.213318 / 1.841788 (-0.628470) | 13.887822 / 8.074308 (5.813514) | 14.276325 / 10.191392 (4.084933) | 0.156227 / 0.680424 (-0.524197) | 0.016377 / 0.534201 (-0.517824) | 0.377080 / 0.579283 (-0.202203) | 0.386561 / 0.434364 (-0.047803) | 0.435631 / 0.540337 (-0.104707) | 0.520863 / 1.386936 (-0.866073) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006740 / 0.011353 (-0.004613) | 0.004704 / 0.011008 (-0.006304) | 0.076840 / 0.038508 (0.038331) | 0.027519 / 0.023109 (0.004409) | 0.343219 / 0.275898 (0.067321) | 0.376810 / 0.323480 (0.053330) | 0.005048 / 0.007986 (-0.002938) | 0.003356 / 0.004328 (-0.000972) | 0.077098 / 0.004250 (0.072848) | 0.038601 / 0.037052 (0.001548) | 0.345723 / 0.258489 (0.087233) | 0.388635 / 0.293841 (0.094794) | 0.033612 / 0.128546 (-0.094934) | 0.011689 / 0.075646 (-0.063957) | 0.086446 / 0.419271 (-0.332825) | 0.044390 / 0.043533 (0.000857) | 0.343763 / 0.255139 (0.088624) | 0.368591 / 0.283200 (0.085392) | 0.091605 / 0.141683 (-0.050078) | 1.478615 / 1.452155 (0.026461) | 1.580858 / 1.492716 (0.088142) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.223547 / 0.018006 (0.205541) | 0.411243 / 0.000490 (0.410753) | 0.000916 / 0.000200 (0.000716) | 0.000070 / 0.000054 (0.000016) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025223 / 0.037411 (-0.012189) | 0.100970 / 0.014526 (0.086445) | 0.108178 / 0.176557 (-0.068378) | 0.156827 / 0.737135 (-0.580308) | 0.111431 / 0.296338 (-0.184907) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.434168 / 0.215209 (0.218959) | 4.361874 / 2.077655 (2.284219) | 2.060735 / 1.504120 (0.556615) | 1.861100 / 1.541195 (0.319906) | 1.920692 / 1.468490 (0.452202) | 0.697909 / 4.584777 (-3.886868) | 3.477036 / 3.745712 (-0.268676) | 3.002469 / 5.269862 (-2.267392) | 1.449325 / 4.565676 (-3.116351) | 0.083034 / 0.424275 (-0.341241) | 0.012805 / 0.007607 (0.005198) | 0.531391 / 0.226044 (0.305347) | 5.323015 / 2.268929 (3.054086) | 2.488605 / 55.444624 (-52.956020) | 2.158254 / 6.876477 (-4.718222) | 2.189633 / 2.142072 (0.047560) | 0.805972 / 4.805227 (-3.999256) | 0.153105 / 6.500664 (-6.347559) | 0.068909 / 0.075469 (-0.006561) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.276851 / 1.841788 (-0.564937) | 14.431510 / 8.074308 (6.357202) | 14.544788 / 10.191392 (4.353396) | 0.146589 / 0.680424 (-0.533835) | 0.016890 / 0.534201 (-0.517311) | 0.379897 / 0.579283 (-0.199387) | 0.389153 / 0.434364 (-0.045211) | 0.440097 / 0.540337 (-0.100241) | 0.524191 / 1.386936 (-0.862745) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#e1af108015e43f9df8734a1faeeaeb9eafce3971 \"CML watermark\")\n" ]
1,678,466,019,000
1,679,380,565,000
1,679,380,134,000
MEMBER
null
Support streaming datasets with `numpy.load`. See: https://huggingface.co/datasets/qgallouedec/gia_dataset/discussions/1
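For illustration, a sketch of the pattern this PR is meant to support inside a dataset script; the feature name and file layout are hypothetical:

```python
import numpy as np


def _generate_examples(self, filepath):
    # In streaming mode, `open` is patched by `datasets` to read over HTTP;
    # with this PR, np.load should work on the streamed file object too.
    with open(filepath, "rb") as f:
        arr = np.load(f)
    for idx, row in enumerate(arr):
        yield idx, {"values": row}
```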
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5626/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5626/timeline
null
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5626", "html_url": "https://github.com/huggingface/datasets/pull/5626", "diff_url": "https://github.com/huggingface/datasets/pull/5626.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5626.patch", "merged_at": "2023-03-21T06:28:54" }
true
https://api.github.com/repos/huggingface/datasets/issues/5625
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5625/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5625/comments
https://api.github.com/repos/huggingface/datasets/issues/5625/events
https://github.com/huggingface/datasets/issues/5625
1,618,971,855
I_kwDODunzps5gf4zP
5,625
Allow "jsonl" data type signifier
{ "login": "BramVanroy", "id": 2779410, "node_id": "MDQ6VXNlcjI3Nzk0MTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BramVanroy", "html_url": "https://github.com/BramVanroy", "followers_url": "https://api.github.com/users/BramVanroy/followers", "following_url": "https://api.github.com/users/BramVanroy/following{/other_user}", "gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}", "starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions", "organizations_url": "https://api.github.com/users/BramVanroy/orgs", "repos_url": "https://api.github.com/users/BramVanroy/repos", "events_url": "https://api.github.com/users/BramVanroy/events{/privacy}", "received_events_url": "https://api.github.com/users/BramVanroy/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
[ "You can use \"json\" instead. It doesn't work by extension names, but rather by dataset builder names, e.g. \"text\", \"imagefolder\", etc. I don't think the example in `transformers` is correct because of that", "Yes, I understand the reasoning but this issue is to propose that the example in transformers (while incorrect) \"makes sense\" in terms of user expectation. So the question is whether it would be possible to add \"aliases\" for common types (like \"json\" and \"text\") based on common extensions (like jsonl and txt)?" ]
1,678,454,508,000
1,678,530,939,000
null
CONTRIBUTOR
null
### Feature request `load_dataset` currently does not accept `jsonl` as a type, only `json`. ### Motivation I was working with one of the `run_translation` scripts and used my own datasets (`.jsonl`) as the train dataset. But the default code did not work because ``` FileNotFoundError: Couldn't find a dataset script at jsonl\jsonl.py or any data file in the same directory. Couldn't find 'jsonl' on the Hugging Face Hub either: FileNotFoundError: Dataset 'jsonl' doesn't exist on the Hub. If the repo is private or gated, make sure to log in with `huggingface-cli login`. ``` The reason is that the script has these lines to extract the data type from the file extension. Therefore, the derived type is `jsonl`, which is not recognized by `datasets`, as the error above shows. https://github.com/huggingface/transformers/blob/ade26bf9912f69e2110137443e4406d7dbe253e7/examples/pytorch/translation/run_translation.py#L342-L356 I suppose you could argue that this is the script's fault (in which case I'll do a PR over at `transformers`), but it makes sense to me to add `jsonl` as an alias for `json` in `datasets`. ### Your contribution At the moment I cannot work on this. I think it can be as "easy" as having an alias for `json`, namely `jsonl`.
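As the comment above points out, JSON Lines files already load through the "json" builder today; the extension only causes trouble when it is used to derive the builder name, as in the `run_translation` snippet. A minimal working call (the file name is hypothetical):

```python
from datasets import load_dataset

# The "json" builder handles JSON Lines regardless of the file extension.
ds = load_dataset("json", data_files={"train": "train.jsonl"})
```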
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5625/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5625/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5624
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5624/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5624/comments
https://api.github.com/repos/huggingface/datasets/issues/5624/events
https://github.com/huggingface/datasets/issues/5624
1,617,400,192
I_kwDODunzps5gZ5GA
5,624
glue datasets returning -1 for test split
{ "login": "lithafnium", "id": 8939967, "node_id": "MDQ6VXNlcjg5Mzk5Njc=", "avatar_url": "https://avatars.githubusercontent.com/u/8939967?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lithafnium", "html_url": "https://github.com/lithafnium", "followers_url": "https://api.github.com/users/lithafnium/followers", "following_url": "https://api.github.com/users/lithafnium/following{/other_user}", "gists_url": "https://api.github.com/users/lithafnium/gists{/gist_id}", "starred_url": "https://api.github.com/users/lithafnium/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lithafnium/subscriptions", "organizations_url": "https://api.github.com/users/lithafnium/orgs", "repos_url": "https://api.github.com/users/lithafnium/repos", "events_url": "https://api.github.com/users/lithafnium/events{/privacy}", "received_events_url": "https://api.github.com/users/lithafnium/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @lithafnium, thanks for reporting.\r\n\r\nPlease note that you can use the \"Community\" tab in the corresponding dataset page to start any discussion: https://huggingface.co/datasets/glue/discussions\r\n\r\nIndeed this issue was already raised there (https://huggingface.co/datasets/glue/discussions/5) and answered: https://huggingface.co/datasets/glue/discussions/5#63907885937867f0cb3cde31\r\n> The test labels are not public.\r\n>\r\n> Note this dataset belongs to a benchmark: people send their predictions for the test split to GLUE (https://gluebenchmark.com/) and then they get a score in their leaderboard...\r\n" ]
1,678,373,238,000
1,678,380,569,000
1,678,380,569,000
NONE
null
### Describe the bug Downloading any dataset from GLUE yields -1 as the class label for the test split. Train and validation have regular 0/1 class labels. This is also visible in the dataset card online. ### Steps to reproduce the bug ```python
from datasets import load_dataset

dataset = load_dataset("glue", "sst2")
for example in dataset["test"]:
    # prints out -1
    print(example["label"])
``` ### Expected behavior The labels should be 0/1 instead of -1. ### Environment info - `datasets` version: 2.4.0 - Platform: Linux-5.15.0-46-generic-x86_64-with-glibc2.17 - Python version: 3.8.16 - PyArrow version: 8.0.0 - Pandas version: 1.5.3
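As the reply below explains, the test labels are withheld by the GLUE benchmark, so -1 is a placeholder rather than a bug. A short sketch of a defensive workaround; the -1 filter is illustrative, not an official API:

```python
from datasets import load_dataset

dataset = load_dataset("glue", "sst2")
# GLUE fills withheld test labels with -1; keep only labeled examples.
labeled_test = dataset["test"].filter(lambda ex: ex["label"] != -1)
print(len(labeled_test))  # 0 for sst2: every test label is withheld
```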
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5624/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5624/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5623
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5623/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5623/comments
https://api.github.com/repos/huggingface/datasets/issues/5623/events
https://github.com/huggingface/datasets/pull/5623
1,616,712,665
PR_kwDODunzps5Lpb4q
5,623
Remove set_access_token usage + fail tests if FutureWarning
{ "login": "Wauplin", "id": 11801849, "node_id": "MDQ6VXNlcjExODAxODQ5", "avatar_url": "https://avatars.githubusercontent.com/u/11801849?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Wauplin", "html_url": "https://github.com/Wauplin", "followers_url": "https://api.github.com/users/Wauplin/followers", "following_url": "https://api.github.com/users/Wauplin/following{/other_user}", "gists_url": "https://api.github.com/users/Wauplin/gists{/gist_id}", "starred_url": "https://api.github.com/users/Wauplin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Wauplin/subscriptions", "organizations_url": "https://api.github.com/users/Wauplin/orgs", "repos_url": "https://api.github.com/users/Wauplin/repos", "events_url": "https://api.github.com/users/Wauplin/events{/privacy}", "received_events_url": "https://api.github.com/users/Wauplin/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008505 / 0.011353 (-0.002848) | 0.004445 / 0.011008 (-0.006563) | 0.102197 / 0.038508 (0.063689) | 0.029886 / 0.023109 (0.006776) | 0.305387 / 0.275898 (0.029489) | 0.355986 / 0.323480 (0.032507) | 0.006814 / 0.007986 (-0.001172) | 0.003298 / 0.004328 (-0.001030) | 0.079204 / 0.004250 (0.074954) | 0.035618 / 0.037052 (-0.001434) | 0.320430 / 0.258489 (0.061941) | 0.353330 / 0.293841 (0.059490) | 0.033280 / 0.128546 (-0.095266) | 0.011300 / 0.075646 (-0.064347) | 0.324627 / 0.419271 (-0.094644) | 0.040405 / 0.043533 (-0.003128) | 0.308760 / 0.255139 (0.053621) | 0.331885 / 0.283200 (0.048685) | 0.084605 / 0.141683 (-0.057077) | 1.576598 / 1.452155 (0.124443) | 1.530694 / 1.492716 (0.037977) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.191142 / 0.018006 (0.173136) | 0.404042 / 0.000490 (0.403552) | 0.001185 / 0.000200 (0.000985) | 0.000074 / 0.000054 (0.000019) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022889 / 0.037411 (-0.014523) | 0.095862 / 0.014526 (0.081336) | 0.104382 / 0.176557 (-0.072175) | 0.139407 / 0.737135 (-0.597728) | 0.106813 / 0.296338 (-0.189525) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.419083 / 0.215209 (0.203874) | 4.188702 / 2.077655 (2.111047) | 1.897854 / 1.504120 (0.393734) | 1.689544 / 1.541195 (0.148350) | 1.714032 / 1.468490 
(0.245542) | 0.695541 / 4.584777 (-3.889236) | 3.370584 / 3.745712 (-0.375128) | 3.205549 / 5.269862 (-2.064313) | 1.641202 / 4.565676 (-2.924474) | 0.081849 / 0.424275 (-0.342426) | 0.012043 / 0.007607 (0.004436) | 0.529618 / 0.226044 (0.303574) | 5.314167 / 2.268929 (3.045238) | 2.357271 / 55.444624 (-53.087353) | 1.979684 / 6.876477 (-4.896793) | 2.030057 / 2.142072 (-0.112015) | 0.813013 / 4.805227 (-3.992214) | 0.150165 / 6.500664 (-6.350499) | 0.064595 / 0.075469 (-0.010874) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.237824 / 1.841788 (-0.603964) | 13.552178 / 8.074308 (5.477870) | 14.089433 / 10.191392 (3.898041) | 0.149325 / 0.680424 (-0.531099) | 0.028543 / 0.534201 (-0.505658) | 0.396848 / 0.579283 (-0.182435) | 0.396230 / 0.434364 (-0.038134) | 0.466317 / 0.540337 (-0.074021) | 0.539579 / 1.386936 (-0.847357) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006224 / 0.011353 (-0.005128) | 0.004429 / 0.011008 (-0.006579) | 0.075740 / 0.038508 (0.037232) | 0.026717 / 0.023109 (0.003608) | 0.341685 / 0.275898 (0.065787) | 0.383671 / 0.323480 (0.060191) | 0.004682 / 0.007986 (-0.003304) | 0.004681 / 0.004328 (0.000352) | 0.076638 / 0.004250 (0.072387) | 0.034577 / 0.037052 (-0.002476) | 0.341160 / 0.258489 (0.082671) | 0.407590 / 0.293841 (0.113749) | 0.031121 / 0.128546 (-0.097425) | 0.011479 / 0.075646 (-0.064167) | 0.085299 / 0.419271 (-0.333973) | 0.042005 / 0.043533 (-0.001528) | 0.339682 / 0.255139 (0.084543) | 0.377669 / 0.283200 (0.094469) | 0.087751 / 0.141683 (-0.053932) | 1.523910 / 1.452155 (0.071756) | 1.607487 / 1.492716 (0.114771) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.225605 / 0.018006 (0.207599) | 0.395851 / 0.000490 (0.395361) | 0.004404 / 0.000200 (0.004204) | 0.000082 / 0.000054 (0.000028) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024489 / 0.037411 (-0.012922) | 0.099813 / 0.014526 (0.085287) | 0.107392 / 0.176557 (-0.069165) | 0.139567 / 0.737135 (-0.597568) | 0.110080 / 0.296338 (-0.186258) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.449051 / 0.215209 (0.233841) | 4.463098 / 2.077655 (2.385443) | 2.122548 / 1.504120 (0.618428) | 1.913863 / 1.541195 (0.372669) | 1.963988 / 1.468490 (0.495498) | 0.698442 / 4.584777 (-3.886335) | 3.330425 / 3.745712 (-0.415287) | 1.867843 / 5.269862 (-3.402019) | 1.163740 / 4.565676 (-3.401937) | 0.083209 / 0.424275 (-0.341066) | 0.012594 / 0.007607 (0.004987) | 0.547074 / 0.226044 (0.321030) | 5.474779 / 2.268929 (3.205851) | 2.548025 / 55.444624 (-52.896599) | 2.202435 / 6.876477 (-4.674041) | 2.220330 / 2.142072 (0.078257) | 0.810104 / 4.805227 (-3.995124) | 0.151141 / 6.500664 (-6.349523) | 0.066204 / 0.075469 (-0.009265) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.272075 / 1.841788 (-0.569712) | 13.749523 / 8.074308 (5.675215) | 14.270974 / 10.191392 (4.079582) | 0.141285 / 0.680424 (-0.539139) | 0.016526 / 0.534201 (-0.517675) | 0.393175 / 0.579283 (-0.186109) | 0.391577 / 0.434364 (-0.042787) | 0.492824 / 0.540337 (-0.047513) | 0.580069 / 1.386936 (-0.806867) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#1cda14136c9f79c763c17d49b77eabfb233fbb35 \"CML watermark\")\n", "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | 
write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008901 / 0.011353 (-0.002452) | 0.005017 / 0.011008 (-0.005991) | 0.099340 / 0.038508 (0.060832) | 0.034218 / 0.023109 (0.011109) | 0.295927 / 0.275898 (0.020029) | 0.330087 / 0.323480 (0.006607) | 0.008041 / 0.007986 (0.000056) | 0.005013 / 0.004328 (0.000685) | 0.074255 / 0.004250 (0.070004) | 0.049634 / 0.037052 (0.012582) | 0.299972 / 0.258489 (0.041483) | 0.349879 / 0.293841 (0.056038) | 0.038500 / 0.128546 (-0.090047) | 0.011980 / 0.075646 (-0.063666) | 0.332408 / 0.419271 (-0.086863) | 0.048385 / 0.043533 (0.004852) | 0.300393 / 0.255139 (0.045254) | 0.316972 / 0.283200 (0.033772) | 0.101674 / 0.141683 (-0.040009) | 1.424300 / 1.452155 (-0.027854) | 1.520658 / 1.492716 (0.027942) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.270084 / 0.018006 (0.252078) | 0.538612 / 0.000490 (0.538123) | 0.004439 / 0.000200 (0.004240) | 0.000089 / 0.000054 (0.000035) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026841 / 0.037411 (-0.010570) | 0.106454 / 0.014526 (0.091928) | 0.118371 / 0.176557 (-0.058186) | 0.155545 / 0.737135 (-0.581590) | 0.125119 / 0.296338 (-0.171220) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.395794 / 0.215209 (0.180585) | 3.958195 / 2.077655 (1.880540) | 1.789010 / 1.504120 (0.284890) | 1.601380 / 1.541195 (0.060186) | 1.641062 / 1.468490 (0.172572) | 0.679547 / 4.584777 (-3.905230) | 3.778018 / 3.745712 (0.032306) | 2.101232 / 5.269862 (-3.168630) | 1.463932 / 4.565676 (-3.101745) | 0.083639 / 0.424275 (-0.340636) | 0.012339 / 0.007607 (0.004732) | 0.498708 / 0.226044 (0.272663) | 4.995178 / 2.268929 (2.726249) | 2.272650 / 55.444624 (-53.171975) | 1.907879 / 6.876477 (-4.968598) | 2.012666 / 2.142072 (-0.129407) | 0.829564 / 4.805227 (-3.975663) | 0.165049 / 6.500664 (-6.335615) | 0.062291 / 0.075469 (-0.013178) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.193977 / 1.841788 (-0.647811) | 14.816939 / 8.074308 (6.742631) | 14.369729 / 10.191392 (4.178337) | 0.156339 / 0.680424 (-0.524084) | 0.029151 / 0.534201 (-0.505050) | 0.449362 / 0.579283 (-0.129921) | 0.451895 / 0.434364 (0.017531) | 0.520324 / 0.540337 (-0.020013) | 0.610716 / 
1.386936 (-0.776220) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007145 / 0.011353 (-0.004207) | 0.005299 / 0.011008 (-0.005710) | 0.074216 / 0.038508 (0.035708) | 0.033015 / 0.023109 (0.009906) | 0.337117 / 0.275898 (0.061219) | 0.367161 / 0.323480 (0.043682) | 0.005898 / 0.007986 (-0.002088) | 0.005283 / 0.004328 (0.000955) | 0.073795 / 0.004250 (0.069544) | 0.049253 / 0.037052 (0.012201) | 0.343327 / 0.258489 (0.084838) | 0.396417 / 0.293841 (0.102576) | 0.037162 / 0.128546 (-0.091384) | 0.012456 / 0.075646 (-0.063191) | 0.086668 / 0.419271 (-0.332604) | 0.049937 / 0.043533 (0.006404) | 0.335138 / 0.255139 (0.079999) | 0.358111 / 0.283200 (0.074912) | 0.107328 / 0.141683 (-0.034355) | 1.482290 / 1.452155 (0.030135) | 1.557872 / 1.492716 (0.065156) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.343759 / 0.018006 (0.325752) | 0.542697 / 0.000490 (0.542207) | 0.025943 / 0.000200 (0.025743) | 0.000264 / 0.000054 (0.000209) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028469 / 0.037411 (-0.008943) | 0.108620 / 0.014526 (0.094094) | 0.123667 / 0.176557 (-0.052890) | 0.168829 / 0.737135 (-0.568306) | 0.125875 / 0.296338 (-0.170464) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.424640 / 0.215209 (0.209431) | 4.227611 / 2.077655 (2.149956) | 2.003605 / 1.504120 (0.499486) | 1.810696 / 1.541195 (0.269501) | 1.882700 / 1.468490 (0.414210) | 
0.701361 / 4.584777 (-3.883416) | 3.808054 / 3.745712 (0.062342) | 3.234896 / 5.269862 (-2.034966) | 1.872195 / 4.565676 (-2.693482) | 0.088102 / 0.424275 (-0.336173) | 0.012810 / 0.007607 (0.005203) | 0.551855 / 0.226044 (0.325810) | 5.245654 / 2.268929 (2.976725) | 2.557123 / 55.444624 (-52.887502) | 2.238897 / 6.876477 (-4.637580) | 2.256260 / 2.142072 (0.114187) | 0.849804 / 4.805227 (-3.955424) | 0.170557 / 6.500664 (-6.330107) | 0.064718 / 0.075469 (-0.010751) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.271701 / 1.841788 (-0.570087) | 14.925010 / 8.074308 (6.850702) | 14.966948 / 10.191392 (4.775556) | 0.162966 / 0.680424 (-0.517458) | 0.017618 / 0.534201 (-0.516583) | 0.433484 / 0.579283 (-0.145799) | 0.430047 / 0.434364 (-0.004316) | 0.537356 / 0.540337 (-0.002981) | 0.639237 / 1.386936 (-0.747699) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#aba888cb4d225b1a05596f52258a079bda98df70 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.012054 / 0.011353 (0.000702) | 0.005923 / 0.011008 (-0.005085) | 0.129531 / 0.038508 (0.091023) | 0.036283 / 0.023109 (0.013173) | 0.374406 / 0.275898 (0.098508) | 0.452538 / 0.323480 (0.129058) | 0.009419 / 0.007986 (0.001434) | 0.004783 / 0.004328 (0.000454) | 0.095292 / 0.004250 (0.091042) | 0.041290 / 0.037052 (0.004238) | 0.403940 / 0.258489 (0.145451) | 0.443091 / 0.293841 (0.149250) | 0.054635 / 0.128546 (-0.073911) | 0.019062 / 0.075646 (-0.056584) | 0.417053 / 0.419271 (-0.002218) | 0.060865 / 0.043533 (0.017332) | 0.378535 / 0.255139 (0.123396) | 0.401036 / 0.283200 (0.117836) | 0.122959 / 0.141683 (-0.018724) | 1.768517 / 1.452155 (0.316362) | 1.794700 / 1.492716 (0.301984) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.246529 / 0.018006 (0.228523) | 0.576887 / 0.000490 (0.576397) | 0.005031 / 0.000200 (0.004831) | 0.000125 / 
0.000054 (0.000070) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027363 / 0.037411 (-0.010049) | 0.119037 / 0.014526 (0.104511) | 0.148109 / 0.176557 (-0.028447) | 0.179370 / 0.737135 (-0.557765) | 0.145105 / 0.296338 (-0.151234) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.588748 / 0.215209 (0.373539) | 5.934433 / 2.077655 (3.856778) | 2.549811 / 1.504120 (1.045691) | 2.234616 / 1.541195 (0.693421) | 2.268002 / 1.468490 (0.799512) | 1.154643 / 4.584777 (-3.430134) | 5.333935 / 3.745712 (1.588223) | 2.971065 / 5.269862 (-2.298796) | 2.131427 / 4.565676 (-2.434250) | 0.127737 / 0.424275 (-0.296538) | 0.014699 / 0.007607 (0.007091) | 0.735160 / 0.226044 (0.509115) | 7.403838 / 2.268929 (5.134909) | 3.298169 / 55.444624 (-52.146455) | 2.661285 / 6.876477 (-4.215192) | 2.688877 / 2.142072 (0.546805) | 1.344110 / 4.805227 (-3.461118) | 0.242016 / 6.500664 (-6.258648) | 0.077418 / 0.075469 (0.001948) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.566426 / 1.841788 (-0.275362) | 17.144308 / 8.074308 (9.070000) | 19.360598 / 10.191392 (9.169206) | 0.238554 / 0.680424 (-0.441870) | 0.044946 / 0.534201 (-0.489255) | 0.554183 / 0.579283 (-0.025100) | 0.630175 / 0.434364 (0.195811) | 0.630319 / 0.540337 (0.089982) | 0.745060 / 1.386936 (-0.641876) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009255 / 0.011353 (-0.002098) | 0.006951 / 0.011008 (-0.004057) | 0.092021 / 0.038508 (0.053513) | 0.035588 / 0.023109 (0.012479) | 0.415564 / 0.275898 (0.139666) | 0.446393 / 0.323480 (0.122913) | 0.006532 / 0.007986 (-0.001453) | 0.005099 / 0.004328 (0.000771) | 0.094801 / 0.004250 (0.090550) | 0.044926 / 0.037052 (0.007874) | 0.439125 / 0.258489 (0.180636) | 0.473004 / 0.293841 (0.179163) | 0.057025 / 0.128546 (-0.071522) | 0.018711 / 0.075646 (-0.056935) | 0.110844 / 0.419271 (-0.308427) | 0.058347 / 0.043533 (0.014814) | 0.435721 / 0.255139 (0.180583) | 0.434624 / 0.283200 (0.151424) | 0.114505 / 0.141683 (-0.027178) | 1.722379 / 1.452155 (0.270225) | 1.775836 / 1.492716 (0.283120) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.275893 / 0.018006 (0.257887) | 0.552590 / 0.000490 (0.552100) | 0.007919 / 0.000200 (0.007719) | 0.000122 / 0.000054 (0.000068) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030003 / 0.037411 (-0.007408) | 0.130145 / 0.014526 (0.115619) | 0.131878 / 0.176557 (-0.044678) | 0.194693 / 0.737135 (-0.542442) | 0.137689 / 0.296338 (-0.158650) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.619591 / 0.215209 (0.404382) | 6.324095 / 2.077655 (4.246441) | 2.756563 / 1.504120 (1.252444) | 2.384744 / 1.541195 (0.843549) | 2.450407 / 1.468490 (0.981917) | 1.235391 / 4.584777 (-3.349386) | 5.535383 / 3.745712 (1.789671) | 4.831927 / 5.269862 (-0.437934) | 2.757158 / 4.565676 (-1.808519) | 0.133980 / 0.424275 (-0.290295) | 0.014965 / 0.007607 (0.007358) | 0.731423 / 0.226044 (0.505379) | 7.401850 / 2.268929 (5.132921) | 3.346585 / 55.444624 (-52.098039) | 2.705523 / 6.876477 (-4.170953) | 2.637397 / 2.142072 (0.495324) | 1.347745 / 4.805227 (-3.457482) | 0.248658 / 6.500664 (-6.252006) | 0.077427 / 0.075469 (0.001958) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.520860 / 1.841788 (-0.320928) | 17.153000 / 8.074308 (9.078692) | 19.051393 / 10.191392 (8.860001) | 0.236840 / 0.680424 (-0.443584) | 0.026638 / 0.534201 (-0.507563) | 0.518417 / 0.579283 (-0.060866) | 0.607555 / 0.434364 (0.173191) | 0.637381 / 0.540337 (0.097044) | 0.767109 / 1.386936 (-0.619827) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#5ee291f2c5e68a782c82f916e250d470a7e285e7 \"CML watermark\")\n", "Great, I merged it. Thanks for the review :)", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006711 / 0.011353 (-0.004641) | 0.004472 / 0.011008 (-0.006536) | 0.099581 / 0.038508 (0.061073) | 0.028036 / 0.023109 (0.004927) | 0.301197 / 0.275898 (0.025298) | 0.339341 / 0.323480 (0.015861) | 0.005107 / 0.007986 (-0.002879) | 0.003312 / 0.004328 (-0.001017) | 0.075823 / 0.004250 (0.071573) | 0.040861 / 0.037052 (0.003809) | 0.303407 / 0.258489 (0.044918) | 0.350717 / 0.293841 (0.056876) | 0.031657 / 0.128546 (-0.096889) | 0.011627 / 0.075646 (-0.064020) | 0.325465 / 0.419271 (-0.093806) | 0.052671 / 0.043533 (0.009138) | 0.301953 / 0.255139 (0.046814) | 0.327164 / 0.283200 (0.043964) | 0.091264 / 0.141683 (-0.050419) | 1.508947 / 1.452155 (0.056792) | 1.605685 / 1.492716 (0.112968) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.202977 / 0.018006 (0.184971) | 0.400602 / 0.000490 (0.400112) | 0.003253 / 0.000200 (0.003053) | 0.000080 / 0.000054 (0.000025) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022453 / 0.037411 (-0.014958) | 0.098633 / 0.014526 (0.084107) | 0.105996 / 0.176557 (-0.070561) | 0.162428 / 0.737135 (-0.574707) | 0.107139 / 0.296338 (-0.189199) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.453061 / 0.215209 (0.237852) | 4.530844 / 2.077655 (2.453190) | 2.286394 / 1.504120 (0.782274) | 2.076479 / 1.541195 (0.535284) | 2.143730 / 1.468490 (0.675240) | 0.702540 / 4.584777 (-3.882237) | 3.442688 / 3.745712 (-0.303024) | 1.874429 / 5.269862 (-3.395433) | 1.172331 / 4.565676 (-3.393346) | 0.083643 / 0.424275 (-0.340632) | 0.012519 / 0.007607 (0.004911) | 0.556859 / 0.226044 (0.330814) | 5.582843 / 2.268929 (3.313915) | 2.753734 / 55.444624 (-52.690890) | 2.415771 / 6.876477 (-4.460705) | 2.531428 / 2.142072 (0.389356) | 0.813005 / 4.805227 (-3.992222) | 0.153322 / 6.500664 (-6.347343) | 0.068061 / 0.075469 (-0.007408) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.180481 / 1.841788 (-0.661306) | 13.623933 / 8.074308 (5.549625) | 14.431288 / 10.191392 (4.239896) | 0.127580 / 0.680424 (-0.552844) | 0.016714 / 0.534201 (-0.517487) | 0.394236 / 0.579283 (-0.185047) | 0.381718 / 0.434364 (-0.052646) | 0.486749 / 0.540337 (-0.053589) | 0.565939 / 1.386936 (-0.820997) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006720 / 0.011353 (-0.004633) | 0.004518 / 0.011008 (-0.006491) | 0.076819 / 0.038508 (0.038311) | 0.027272 / 0.023109 (0.004163) | 0.340890 / 0.275898 (0.064992) | 0.381435 / 0.323480 (0.057955) | 0.004980 / 0.007986 (-0.003005) | 0.003382 / 0.004328 (-0.000947) | 0.076368 / 0.004250 (0.072117) | 0.037365 / 0.037052 (0.000313) | 0.341484 / 0.258489 (0.082995) | 0.388917 / 0.293841 (0.095076) | 0.032004 / 0.128546 (-0.096543) | 0.011612 / 0.075646 (-0.064034) | 0.084929 / 0.419271 (-0.334342) | 0.041861 / 0.043533 (-0.001671) | 0.350392 / 0.255139 (0.095253) | 0.369745 / 0.283200 (0.086546) | 0.088301 / 0.141683 (-0.053382) | 1.587296 / 1.452155 (0.135141) | 1.629761 / 1.492716 (0.137045) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old 
(diff) | 0.174825 / 0.018006 (0.156818) | 0.414371 / 0.000490 (0.413881) | 0.001595 / 0.000200 (0.001395) | 0.000078 / 0.000054 (0.000023) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025403 / 0.037411 (-0.012009) | 0.099593 / 0.014526 (0.085067) | 0.108819 / 0.176557 (-0.067738) | 0.161613 / 0.737135 (-0.575523) | 0.112302 / 0.296338 (-0.184037) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.439234 / 0.215209 (0.224024) | 4.389073 / 2.077655 (2.311418) | 2.063215 / 1.504120 (0.559095) | 1.852550 / 1.541195 (0.311356) | 1.920014 / 1.468490 (0.451524) | 0.710255 / 4.584777 (-3.874522) | 3.430549 / 3.745712 (-0.315164) | 1.886072 / 5.269862 (-3.383790) | 1.177490 / 4.565676 (-3.388186) | 0.084877 / 0.424275 (-0.339398) | 0.012894 / 0.007607 (0.005287) | 0.544950 / 0.226044 (0.318906) | 5.467347 / 2.268929 (3.198419) | 2.508169 / 55.444624 (-52.936455) | 2.167756 / 6.876477 (-4.708721) | 2.212817 / 2.142072 (0.070744) | 0.824762 / 4.805227 (-3.980465) | 0.154387 / 6.500664 (-6.346277) | 0.068535 / 0.075469 (-0.006934) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.284165 / 1.841788 (-0.557623) | 14.153006 / 8.074308 (6.078697) | 14.152569 / 10.191392 (3.961177) | 0.130083 / 0.680424 (-0.550341) | 0.016556 / 0.534201 (-0.517645) | 0.383828 / 0.579283 (-0.195455) | 0.388241 / 0.434364 (-0.046123) | 0.477982 / 0.540337 (-0.062355) | 0.565583 / 1.386936 (-0.821353) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f1e7442d34a059ff377437381542cc762feab057 \"CML watermark\")\n" ]
1,678,351,561,000
1,678,376,340,000
1,678,375,919,000
CONTRIBUTOR
null
`set_access_token` is deprecated and will be removed in `huggingface_hub>=0.14`. This PR removes it from the tests (it was not used in `datasets` source code itself). FYI, it was not needed since `set_access_token` was just setting git credentials and `datasets` doesn't seem to use git anywhere. In the future, use `set_git_credential` if needed. It is a git-credential-agnostic helper, i.e. you can store your git token in `git-credential-cache`, `git-credential-store`, `osxkeychain`, etc. The legacy `set_access_token` could only write to `git-credential-store`, regardless of the user's preference. (for context, I found out about this while working on https://github.com/huggingface/huggingface_hub/pull/1381) --- In addition to this, I have added ``` filterwarnings = error::FutureWarning:huggingface_hub* ``` to the `setup.cfg` config file to fail on future warnings from `huggingface_hub`. In `hfh`'s CI we trigger on FutureWarning from any package, but that is less robust (any package update can lead to a failure). No obligation to keep it like that (I can remove it if you prefer), but I think it's a good idea in order to track future FutureWarnings. FYI, in `huggingface_hub` tests we use `-Werror::FutureWarning --log-cli-level=INFO -sv --durations=0` - FutureWarnings are processed as errors - verbose mode / INFO logs (and above) are captured for easier debugging in the GitHub report - each test duration is tracked, just to see where we can improve. We have quite a long CI (~10min) so it helped improve that.
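For context, the effect of that `filterwarnings` line can be sketched with the standard-library `warnings` module; the module pattern is the only assumption here:

```python
import warnings

# Mirror pytest's `error::FutureWarning:huggingface_hub*` filter: any
# FutureWarning emitted from a huggingface_hub module is raised as an exception.
warnings.filterwarnings("error", category=FutureWarning, module=r"huggingface_hub.*")
```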
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5623/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5623/timeline
null
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5623", "html_url": "https://github.com/huggingface/datasets/pull/5623", "diff_url": "https://github.com/huggingface/datasets/pull/5623.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5623.patch", "merged_at": "2023-03-09T15:31:58" }
true
https://api.github.com/repos/huggingface/datasets/issues/5622
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5622/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5622/comments
https://api.github.com/repos/huggingface/datasets/issues/5622/events
https://github.com/huggingface/datasets/pull/5622
1,615,190,942
PR_kwDODunzps5LkSj8
5,622
Update README template to a better template
{ "login": "emiltj", "id": 54767532, "node_id": "MDQ6VXNlcjU0NzY3NTMy", "avatar_url": "https://avatars.githubusercontent.com/u/54767532?v=4", "gravatar_id": "", "url": "https://api.github.com/users/emiltj", "html_url": "https://github.com/emiltj", "followers_url": "https://api.github.com/users/emiltj/followers", "following_url": "https://api.github.com/users/emiltj/following{/other_user}", "gists_url": "https://api.github.com/users/emiltj/gists{/gist_id}", "starred_url": "https://api.github.com/users/emiltj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/emiltj/subscriptions", "organizations_url": "https://api.github.com/users/emiltj/orgs", "repos_url": "https://api.github.com/users/emiltj/repos", "events_url": "https://api.github.com/users/emiltj/events{/privacy}", "received_events_url": "https://api.github.com/users/emiltj/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "IMO this template should stay generic.\r\n\r\nAlso, we now use [the card template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md) from `hugginface_hub` as the source of truth on the Hub (you now have the option to import it into the dataset card/README.md), so I think the next step would be deleting this template rather than updating it.", "Agreed, the PR was a mistake and meant for my own repo. My bad", "Feel free to close the PR then." ]
1,678,278,623,000
1,678,511,258,000
1,678,511,258,000
NONE
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5622/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5622/timeline
null
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5622", "html_url": "https://github.com/huggingface/datasets/pull/5622", "diff_url": "https://github.com/huggingface/datasets/pull/5622.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5622.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/5621
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5621/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5621/comments
https://api.github.com/repos/huggingface/datasets/issues/5621/events
https://github.com/huggingface/datasets/pull/5621
1,615,029,615
PR_kwDODunzps5LjwD8
5,621
Adding Oracle Cloud to docs
{ "login": "ahosler", "id": 29129502, "node_id": "MDQ6VXNlcjI5MTI5NTAy", "avatar_url": "https://avatars.githubusercontent.com/u/29129502?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ahosler", "html_url": "https://github.com/ahosler", "followers_url": "https://api.github.com/users/ahosler/followers", "following_url": "https://api.github.com/users/ahosler/following{/other_user}", "gists_url": "https://api.github.com/users/ahosler/gists{/gist_id}", "starred_url": "https://api.github.com/users/ahosler/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ahosler/subscriptions", "organizations_url": "https://api.github.com/users/ahosler/orgs", "repos_url": "https://api.github.com/users/ahosler/repos", "events_url": "https://api.github.com/users/ahosler/events{/privacy}", "received_events_url": "https://api.github.com/users/ahosler/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006183 / 0.011353 (-0.005170) | 0.004377 / 0.011008 (-0.006631) | 0.096898 / 0.038508 (0.058390) | 0.027729 / 0.023109 (0.004620) | 0.336582 / 0.275898 (0.060684) | 0.353792 / 0.323480 (0.030312) | 0.004541 / 0.007986 (-0.003445) | 0.004349 / 0.004328 (0.000020) | 0.074403 / 0.004250 (0.070153) | 0.033918 / 0.037052 (-0.003134) | 0.341505 / 0.258489 (0.083016) | 0.380192 / 0.293841 (0.086351) | 0.031703 / 0.128546 (-0.096843) | 0.011561 / 0.075646 (-0.064086) | 0.321848 / 0.419271 (-0.097423) | 0.043407 / 0.043533 (-0.000126) | 0.330365 / 0.255139 (0.075226) | 0.364630 / 0.283200 (0.081430) | 0.084798 / 0.141683 (-0.056885) | 1.450908 / 1.452155 (-0.001246) | 1.522235 / 1.492716 (0.029519) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.198267 / 0.018006 (0.180261) | 0.409554 / 0.000490 (0.409065) | 0.002501 / 0.000200 (0.002301) | 0.000270 / 0.000054 (0.000215) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021801 / 0.037411 (-0.015610) | 0.097429 / 0.014526 (0.082904) | 0.103259 / 0.176557 (-0.073298) | 0.161483 / 0.737135 (-0.575652) | 0.107843 / 0.296338 (-0.188496) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.427057 / 0.215209 (0.211848) | 4.259477 / 2.077655 (2.181823) | 
1.945819 / 1.504120 (0.441699) | 1.733013 / 1.541195 (0.191819) | 1.748486 / 1.468490 (0.279996) | 0.702231 / 4.584777 (-3.882546) | 3.387608 / 3.745712 (-0.358104) | 1.890187 / 5.269862 (-3.379675) | 1.300465 / 4.565676 (-3.265211) | 0.083702 / 0.424275 (-0.340573) | 0.012674 / 0.007607 (0.005067) | 0.527978 / 0.226044 (0.301934) | 5.259610 / 2.268929 (2.990681) | 2.366512 / 55.444624 (-53.078113) | 2.013811 / 6.876477 (-4.862666) | 2.058175 / 2.142072 (-0.083898) | 0.815042 / 4.805227 (-3.990185) | 0.153496 / 6.500664 (-6.347168) | 0.065442 / 0.075469 (-0.010027) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.227494 / 1.841788 (-0.614294) | 13.812921 / 8.074308 (5.738613) | 14.430149 / 10.191392 (4.238757) | 0.145422 / 0.680424 (-0.535002) | 0.016672 / 0.534201 (-0.517529) | 0.382126 / 0.579283 (-0.197157) | 0.388369 / 0.434364 (-0.045995) | 0.446133 / 0.540337 (-0.094204) | 0.531044 / 1.386936 (-0.855892) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006273 / 0.011353 (-0.005080) | 0.004557 / 0.011008 (-0.006452) | 0.077398 / 0.038508 (0.038890) | 0.027295 / 0.023109 (0.004185) | 0.340866 / 0.275898 (0.064968) | 0.373918 / 0.323480 (0.050438) | 0.004967 / 0.007986 (-0.003018) | 0.003337 / 0.004328 (-0.000991) | 0.076041 / 0.004250 (0.071791) | 0.036708 / 0.037052 (-0.000344) | 0.346126 / 0.258489 (0.087637) | 0.385177 / 0.293841 (0.091336) | 0.032272 / 0.128546 (-0.096275) | 0.011756 / 0.075646 (-0.063890) | 0.086512 / 0.419271 (-0.332759) | 0.049310 / 0.043533 (0.005777) | 0.339352 / 0.255139 (0.084213) | 0.372058 / 0.283200 (0.088859) | 0.089712 / 0.141683 (-0.051971) | 1.501964 / 1.452155 (0.049809) | 1.573753 / 1.492716 (0.081037) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.162075 / 0.018006 (0.144069) | 0.391462 / 0.000490 (0.390973) | 0.002868 / 0.000200 (0.002668) | 0.000077 / 0.000054 (0.000023) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024176 / 0.037411 (-0.013235) | 0.099631 / 0.014526 (0.085105) | 0.107544 / 0.176557 (-0.069013) | 0.157659 / 0.737135 (-0.579477) | 0.111130 / 0.296338 (-0.185209) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.442086 / 0.215209 (0.226877) | 4.426311 / 2.077655 (2.348657) | 2.086133 / 1.504120 (0.582013) | 1.860415 / 1.541195 (0.319220) | 1.892306 / 1.468490 (0.423816) | 0.702752 / 4.584777 (-3.882025) | 3.394358 / 3.745712 (-0.351354) | 1.857396 / 5.269862 (-3.412466) | 1.167168 / 4.565676 (-3.398509) | 0.083549 / 0.424275 (-0.340726) | 0.012780 / 0.007607 (0.005173) | 0.547075 / 0.226044 (0.321031) | 5.466619 / 2.268929 (3.197691) | 2.548893 / 55.444624 (-52.895731) | 2.185574 / 6.876477 (-4.690903) | 2.188000 / 2.142072 (0.045928) | 0.810370 / 4.805227 (-3.994857) | 0.153320 / 6.500664 (-6.347344) | 0.068409 / 0.075469 (-0.007060) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.330431 / 1.841788 (-0.511356) | 14.178916 / 8.074308 (6.104608) | 14.409594 / 10.191392 (4.218202) | 0.156270 / 0.680424 (-0.524154) | 0.016452 / 0.534201 (-0.517749) | 0.379837 / 0.579283 (-0.199447) | 0.389896 / 0.434364 (-0.044468) | 0.443892 / 0.540337 (-0.096446) | 0.531392 / 1.386936 (-0.855544) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#e502117cafd92fd9c25d1d6dd047cc650c691629 \"CML watermark\")\n" ]
1,678,270,970,000
1,678,496,238,000
1,678,495,796,000
CONTRIBUTOR
null
Adding Oracle Cloud's fsspec implementation to the list of supported cloud storage providers.
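A hedged sketch of what this enables, assuming the `ocifs` package (which registers the `oci://` protocol with fsspec) and a hypothetical bucket and namespace; the `config` option pointing at an OCI config file is also an assumption:

```python
from datasets import load_dataset, load_from_disk

# Hypothetical OCI bucket/namespace; requires `pip install ocifs`.
storage_options = {"config": "~/.oci/config"}

ds = load_dataset("imdb", split="train")
ds.save_to_disk("oci://my-bucket@my-namespace/datasets/imdb", storage_options=storage_options)
reloaded = load_from_disk("oci://my-bucket@my-namespace/datasets/imdb", storage_options=storage_options)
```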
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5621/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5621/timeline
null
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5621", "html_url": "https://github.com/huggingface/datasets/pull/5621", "diff_url": "https://github.com/huggingface/datasets/pull/5621.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5621.patch", "merged_at": "2023-03-11T00:49:56" }
true
https://api.github.com/repos/huggingface/datasets/issues/5620
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5620/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5620/comments
https://api.github.com/repos/huggingface/datasets/issues/5620/events
https://github.com/huggingface/datasets/pull/5620
1,613,460,520
PR_kwDODunzps5LefAf
5,620
Bump pyarrow to 8.0.0
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009873 / 0.011353 (-0.001480) | 0.005180 / 0.011008 (-0.005828) | 0.099587 / 0.038508 (0.061079) | 0.035674 / 0.023109 (0.012565) | 0.299156 / 0.275898 (0.023258) | 0.361253 / 0.323480 (0.037773) | 0.008159 / 0.007986 (0.000173) | 0.004245 / 0.004328 (-0.000084) | 0.076809 / 0.004250 (0.072559) | 0.045251 / 0.037052 (0.008199) | 0.306002 / 0.258489 (0.047513) | 0.345758 / 0.293841 (0.051917) | 0.037826 / 0.128546 (-0.090721) | 0.011887 / 0.075646 (-0.063759) | 0.333804 / 0.419271 (-0.085467) | 0.047859 / 0.043533 (0.004326) | 0.291866 / 0.255139 (0.036727) | 0.319356 / 0.283200 (0.036157) | 0.104241 / 0.141683 (-0.037442) | 1.443816 / 1.452155 (-0.008338) | 1.514654 / 1.492716 (0.021938) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.009846 / 0.018006 (-0.008160) | 0.439488 / 0.000490 (0.438999) | 0.003227 / 0.000200 (0.003028) | 0.000092 / 0.000054 (0.000037) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027553 / 0.037411 (-0.009858) | 0.105337 / 0.014526 (0.090811) | 0.116203 / 0.176557 (-0.060354) | 0.161140 / 0.737135 (-0.575995) | 0.123002 / 0.296338 (-0.173336) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.400102 / 0.215209 (0.184893) | 3.976748 / 2.077655 (1.899094) | 
1.794763 / 1.504120 (0.290643) | 1.602477 / 1.541195 (0.061282) | 1.703689 / 1.468490 (0.235199) | 0.696751 / 4.584777 (-3.888026) | 3.713832 / 3.745712 (-0.031880) | 2.124536 / 5.269862 (-3.145326) | 1.313005 / 4.565676 (-3.252671) | 0.086130 / 0.424275 (-0.338146) | 0.012085 / 0.007607 (0.004477) | 0.512976 / 0.226044 (0.286932) | 5.135313 / 2.268929 (2.866384) | 2.318173 / 55.444624 (-53.126451) | 1.996360 / 6.876477 (-4.880117) | 2.060150 / 2.142072 (-0.081922) | 0.853534 / 4.805227 (-3.951693) | 0.165586 / 6.500664 (-6.335078) | 0.062365 / 0.075469 (-0.013104) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.178843 / 1.841788 (-0.662945) | 14.541639 / 8.074308 (6.467331) | 14.090782 / 10.191392 (3.899390) | 0.158717 / 0.680424 (-0.521707) | 0.028825 / 0.534201 (-0.505376) | 0.441427 / 0.579283 (-0.137856) | 0.439856 / 0.434364 (0.005492) | 0.530610 / 0.540337 (-0.009727) | 0.634044 / 1.386936 (-0.752892) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007502 / 0.011353 (-0.003851) | 0.005208 / 0.011008 (-0.005801) | 0.075020 / 0.038508 (0.036512) | 0.033297 / 0.023109 (0.010188) | 0.342218 / 0.275898 (0.066320) | 0.376716 / 0.323480 (0.053236) | 0.005906 / 0.007986 (-0.002080) | 0.005320 / 0.004328 (0.000992) | 0.073531 / 0.004250 (0.069281) | 0.049091 / 0.037052 (0.012039) | 0.344202 / 0.258489 (0.085713) | 0.380556 / 0.293841 (0.086715) | 0.037500 / 0.128546 (-0.091047) | 0.012404 / 0.075646 (-0.063242) | 0.087254 / 0.419271 (-0.332017) | 0.055145 / 0.043533 (0.011612) | 0.344112 / 0.255139 (0.088973) | 0.359052 / 0.283200 (0.075852) | 0.108337 / 0.141683 (-0.033345) | 1.450332 / 1.452155 (-0.001822) | 1.553607 / 1.492716 (0.060891) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.216335 / 0.018006 (0.198329) | 0.436813 / 0.000490 (0.436323) | 0.005055 / 0.000200 (0.004855) | 0.000088 / 0.000054 (0.000033) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030037 / 0.037411 (-0.007374) | 0.110854 / 0.014526 (0.096329) | 0.121967 / 0.176557 (-0.054589) | 0.174029 / 0.737135 (-0.563107) | 0.128340 / 0.296338 (-0.167998) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.424463 / 0.215209 (0.209254) | 4.201822 / 2.077655 (2.124167) | 2.043075 / 1.504120 (0.538956) | 1.851841 / 1.541195 (0.310647) | 1.947790 / 1.468490 (0.479300) | 0.684110 / 4.584777 (-3.900667) | 3.763536 / 3.745712 (0.017824) | 3.106988 / 5.269862 (-2.162873) | 1.498305 / 4.565676 (-3.067372) | 0.085079 / 0.424275 (-0.339196) | 0.012241 / 0.007607 (0.004634) | 0.520877 / 0.226044 (0.294832) | 5.181455 / 2.268929 (2.912527) | 2.443038 / 55.444624 (-53.001586) | 2.130823 / 6.876477 (-4.745654) | 2.217901 / 2.142072 (0.075829) | 0.837116 / 4.805227 (-3.968111) | 0.166581 / 6.500664 (-6.334083) | 0.065510 / 0.075469 (-0.009959) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.289317 / 1.841788 (-0.552471) | 15.122019 / 8.074308 (7.047710) | 13.919670 / 10.191392 (3.728278) | 0.150047 / 0.680424 (-0.530377) | 0.017612 / 0.534201 (-0.516589) | 0.426239 / 0.579283 (-0.153044) | 0.425686 / 0.434364 (-0.008678) | 0.521436 / 0.540337 (-0.018901) | 0.618217 / 1.386936 (-0.768719) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#879fc6d5186ce593fe819f1e9e67897a1873766b \"CML watermark\")\n", "We haven't updated the minimal version requirement for PyArrow in a while, so it's ok to make a bigger leap IMO, e.g., PyArrow 8.0 (Colab installs 9.0). With this change, we should also remove the PyArrow version check in `folder_based_builder.py`, and the ones in `table.py`/`arrow_dataset.py` regarding the `to_reader` API if we decide to bump PyArrow to version 8.0.", "I think it's a good opportunity to bump the version to 8.0 which offers higher performance anyway, I wouldn't bother trying to support 6.0.1 anymore. Only 1% of users based on 6.0.1 use the latest `datasets` version 2.10.1\r\n\r\nBumping to 8.0 if it sounds good to you", "Sure, it is OK for those other reasons. 
I would just not stress that the increase of the minimum version is to support pandas 2.0 though...", "If requiring min 8.0, do you know the percentage of people using 7.0 and latest datasets version?", "Around 10% of users have 7.0.0, and 25% among them use the latest datasets version", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006744 / 0.011353 (-0.004609) | 0.004585 / 0.011008 (-0.006423) | 0.097828 / 0.038508 (0.059320) | 0.028230 / 0.023109 (0.005121) | 0.302190 / 0.275898 (0.026292) | 0.335022 / 0.323480 (0.011542) | 0.005107 / 0.007986 (-0.002878) | 0.004648 / 0.004328 (0.000320) | 0.076842 / 0.004250 (0.072592) | 0.038291 / 0.037052 (0.001239) | 0.313286 / 0.258489 (0.054797) | 0.342534 / 0.293841 (0.048693) | 0.031325 / 0.128546 (-0.097221) | 0.011632 / 0.075646 (-0.064014) | 0.321879 / 0.419271 (-0.097392) | 0.042204 / 0.043533 (-0.001329) | 0.304442 / 0.255139 (0.049303) | 0.330912 / 0.283200 (0.047712) | 0.085446 / 0.141683 (-0.056237) | 1.469990 / 1.452155 (0.017835) | 1.551147 / 1.492716 (0.058431) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.185961 / 0.018006 (0.167955) | 0.404675 / 0.000490 (0.404186) | 0.003212 / 0.000200 (0.003012) | 0.000074 / 0.000054 (0.000019) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023876 / 0.037411 (-0.013535) | 0.097820 / 0.014526 (0.083295) | 0.107382 / 0.176557 (-0.069174) | 0.167598 / 0.737135 (-0.569537) | 0.108789 / 0.296338 (-0.187550) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled 
read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.455004 / 0.215209 (0.239795) | 4.529104 / 2.077655 (2.451449) | 2.180068 / 1.504120 (0.675948) | 1.982109 / 1.541195 (0.440914) | 2.041856 / 1.468490 (0.573366) | 0.702029 / 4.584777 (-3.882747) | 3.368613 / 3.745712 (-0.377099) | 1.932303 / 5.269862 (-3.337559) | 1.278340 / 4.565676 (-3.287336) | 0.082836 / 0.424275 (-0.341439) | 0.012349 / 0.007607 (0.004742) | 0.548197 / 0.226044 (0.322153) | 5.509982 / 2.268929 (3.241053) | 2.612889 / 55.444624 (-52.831736) | 2.278157 / 6.876477 (-4.598320) | 2.386923 / 2.142072 (0.244851) | 0.803332 / 4.805227 (-4.001896) | 0.151222 / 6.500664 (-6.349442) | 0.066673 / 0.075469 (-0.008796) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.209453 / 1.841788 (-0.632335) | 13.649733 / 8.074308 (5.575424) | 14.065917 / 10.191392 (3.874525) | 0.128872 / 0.680424 (-0.551551) | 0.016773 / 0.534201 (-0.517428) | 0.385475 / 0.579283 (-0.193809) | 0.386208 / 0.434364 (-0.048156) | 0.475144 / 0.540337 (-0.065194) | 0.564183 / 1.386936 (-0.822753) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006629 / 0.011353 (-0.004724) | 0.004433 / 0.011008 (-0.006575) | 0.076008 / 0.038508 (0.037500) | 0.027471 / 0.023109 (0.004362) | 0.339837 / 0.275898 (0.063939) | 0.376857 / 0.323480 (0.053377) | 0.004930 / 0.007986 (-0.003055) | 0.003312 / 0.004328 (-0.001016) | 0.075070 / 0.004250 (0.070820) | 0.035897 / 0.037052 (-0.001156) | 0.342398 / 0.258489 (0.083909) | 0.380202 / 0.293841 (0.086361) | 0.031781 / 0.128546 (-0.096766) | 0.011697 / 0.075646 (-0.063950) | 0.085926 / 0.419271 (-0.333345) | 0.041599 / 0.043533 (-0.001934) | 0.343098 / 0.255139 (0.087959) | 0.371275 / 0.283200 (0.088076) | 0.090489 / 0.141683 (-0.051194) | 1.483738 / 1.452155 (0.031584) | 1.554973 / 1.492716 (0.062256) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row 
|\n|--------|---|---|---|---|\n| new / old (diff) | 0.183703 / 0.018006 (0.165697) | 0.395105 / 0.000490 (0.394616) | 0.002162 / 0.000200 (0.001963) | 0.000074 / 0.000054 (0.000020) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025432 / 0.037411 (-0.011979) | 0.101322 / 0.014526 (0.086796) | 0.107839 / 0.176557 (-0.068718) | 0.160328 / 0.737135 (-0.576807) | 0.109899 / 0.296338 (-0.186440) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.448001 / 0.215209 (0.232792) | 4.485321 / 2.077655 (2.407666) | 2.157064 / 1.504120 (0.652944) | 1.966141 / 1.541195 (0.424947) | 2.032808 / 1.468490 (0.564318) | 0.705684 / 4.584777 (-3.879093) | 3.359802 / 3.745712 (-0.385910) | 2.694952 / 5.269862 (-2.574910) | 1.471309 / 4.565676 (-3.094368) | 0.084185 / 0.424275 (-0.340090) | 0.012330 / 0.007607 (0.004723) | 0.554083 / 0.226044 (0.328038) | 5.569137 / 2.268929 (3.300208) | 2.586009 / 55.444624 (-52.858615) | 2.234920 / 6.876477 (-4.641557) | 2.285128 / 2.142072 (0.143056) | 0.818825 / 4.805227 (-3.986402) | 0.152604 / 6.500664 (-6.348060) | 0.067722 / 0.075469 (-0.007747) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.305571 / 1.841788 (-0.536217) | 13.687471 / 8.074308 (5.613163) | 13.305401 / 10.191392 (3.114009) | 0.140477 / 0.680424 (-0.539947) | 0.018138 / 0.534201 (-0.516063) | 0.377255 / 0.579283 (-0.202028) | 0.379522 / 0.434364 (-0.054842) | 0.458489 / 0.540337 (-0.081849) | 0.543767 / 1.386936 (-0.843169) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#02570894db6ecc46bf25b7fa1cb1bcdc1dede853 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after 
write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009606 / 0.011353 (-0.001747) | 0.006795 / 0.011008 (-0.004213) | 0.133738 / 0.038508 (0.095230) | 0.043379 / 0.023109 (0.020270) | 0.412917 / 0.275898 (0.137019) | 0.418790 / 0.323480 (0.095310) | 0.007290 / 0.007986 (-0.000696) | 0.004960 / 0.004328 (0.000632) | 0.095496 / 0.004250 (0.091246) | 0.057607 / 0.037052 (0.020555) | 0.402638 / 0.258489 (0.144149) | 0.436206 / 0.293841 (0.142365) | 0.056023 / 0.128546 (-0.072523) | 0.019909 / 0.075646 (-0.055737) | 0.463958 / 0.419271 (0.044687) | 0.064073 / 0.043533 (0.020541) | 0.398337 / 0.255139 (0.143198) | 0.421786 / 0.283200 (0.138586) | 0.131563 / 0.141683 (-0.010120) | 1.840217 / 1.452155 (0.388063) | 1.912013 / 1.492716 (0.419296) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.230519 / 0.018006 (0.212513) | 0.550506 / 0.000490 (0.550017) | 0.003649 / 0.000200 (0.003449) | 0.000107 / 0.000054 (0.000053) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029713 / 0.037411 (-0.007698) | 0.129913 / 0.014526 (0.115387) | 0.131543 / 0.176557 (-0.045013) | 0.203571 / 0.737135 (-0.533565) | 0.141483 / 0.296338 (-0.154856) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.626383 / 0.215209 (0.411174) | 6.193043 / 2.077655 (4.115388) | 2.442728 / 1.504120 (0.938608) | 2.079049 / 1.541195 (0.537855) | 2.117761 / 1.468490 (0.649271) | 1.315296 / 4.584777 (-3.269481) | 5.643709 / 3.745712 (1.897997) | 5.245789 / 5.269862 (-0.024073) | 2.757442 / 4.565676 (-1.808235) | 0.151655 / 0.424275 (-0.272620) | 0.014686 / 0.007607 (0.007079) | 0.779937 / 0.226044 (0.553893) | 7.796685 / 2.268929 (5.527756) | 3.349580 / 55.444624 (-52.095045) | 2.493750 / 6.876477 (-4.382727) | 2.506200 / 2.142072 (0.364128) | 1.534964 / 4.805227 (-3.270263) | 0.260001 / 6.500664 (-6.240663) | 0.080543 / 0.075469 (0.005074) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.541940 / 1.841788 (-0.299848) | 17.851935 / 8.074308 (9.777627) | 22.418859 / 
10.191392 (12.227467) | 0.258602 / 0.680424 (-0.421822) | 0.027679 / 0.534201 (-0.506522) | 0.548379 / 0.579283 (-0.030904) | 0.625505 / 0.434364 (0.191141) | 0.664074 / 0.540337 (0.123737) | 0.797418 / 1.386936 (-0.589518) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009800 / 0.011353 (-0.001553) | 0.006178 / 0.011008 (-0.004830) | 0.105667 / 0.038508 (0.067159) | 0.039380 / 0.023109 (0.016271) | 0.419528 / 0.275898 (0.143630) | 0.469857 / 0.323480 (0.146377) | 0.006672 / 0.007986 (-0.001314) | 0.004745 / 0.004328 (0.000417) | 0.101647 / 0.004250 (0.097397) | 0.048531 / 0.037052 (0.011478) | 0.433364 / 0.258489 (0.174875) | 0.459719 / 0.293841 (0.165878) | 0.054291 / 0.128546 (-0.074256) | 0.020406 / 0.075646 (-0.055240) | 0.122321 / 0.419271 (-0.296951) | 0.059719 / 0.043533 (0.016186) | 0.416083 / 0.255139 (0.160944) | 0.455277 / 0.283200 (0.172077) | 0.119342 / 0.141683 (-0.022341) | 1.862544 / 1.452155 (0.410390) | 2.001428 / 1.492716 (0.508712) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.240951 / 0.018006 (0.222945) | 0.516958 / 0.000490 (0.516468) | 0.000449 / 0.000200 (0.000249) | 0.000092 / 0.000054 (0.000037) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032725 / 0.037411 (-0.004686) | 0.130291 / 0.014526 (0.115765) | 0.139834 / 0.176557 (-0.036723) | 0.214995 / 0.737135 (-0.522140) | 0.150925 / 0.296338 (-0.145414) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.652062 / 0.215209 (0.436853) | 6.584447 / 2.077655 (4.506793) | 2.654838 / 1.504120 (1.150718) | 2.297209 / 1.541195 (0.756015) | 2.420394 / 1.468490 (0.951904) | 1.299285 / 4.584777 (-3.285492) | 5.605849 / 3.745712 (1.860137) | 3.166103 / 5.269862 (-2.103759) | 2.138123 / 4.565676 (-2.427554) | 0.152562 / 0.424275 (-0.271713) | 0.015499 / 0.007607 (0.007892) | 0.816300 / 0.226044 (0.590256) | 8.308746 / 2.268929 (6.039817) | 3.482982 / 55.444624 (-51.961642) | 2.689247 / 6.876477 (-4.187229) | 2.792728 / 2.142072 (0.650656) | 1.566320 / 4.805227 (-3.238907) | 0.264110 / 6.500664 (-6.236554) | 0.083652 / 0.075469 (0.008183) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.643027 / 1.841788 (-0.198760) | 18.612349 / 8.074308 (10.538041) | 19.460644 / 10.191392 (9.269252) | 0.260795 / 0.680424 (-0.419629) | 0.026050 / 0.534201 (-0.508151) | 0.539750 / 0.579283 (-0.039533) | 0.620791 / 0.434364 (0.186428) | 0.645023 / 0.540337 (0.104686) | 0.765604 / 1.386936 (-0.621332) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#e6dcf4c50e14ee6dbc6d763ed1b7ce3501460863 \"CML watermark\")\n", "ready for re-review :)", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006388 / 0.011353 (-0.004965) | 0.004469 / 0.011008 (-0.006540) | 0.097082 / 0.038508 (0.058573) | 0.028005 / 0.023109 (0.004895) | 0.364797 / 0.275898 (0.088899) | 0.399671 / 0.323480 (0.076191) | 0.005062 / 0.007986 (-0.002923) | 0.004580 / 0.004328 (0.000252) | 0.075670 / 0.004250 (0.071420) | 0.038328 / 0.037052 (0.001276) | 0.365948 / 0.258489 (0.107459) | 0.402631 / 0.293841 (0.108790) | 0.031378 / 0.128546 (-0.097168) | 0.011443 / 0.075646 (-0.064203) | 0.321590 / 0.419271 (-0.097682) | 0.042263 / 0.043533 (-0.001270) | 0.368238 / 0.255139 (0.113099) | 0.389928 / 0.283200 (0.106728) | 0.085203 / 0.141683 (-0.056480) | 1.462820 / 1.452155 (0.010665) | 1.529207 / 1.492716 (0.036490) |\n\n### 
Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.197194 / 0.018006 (0.179188) | 0.410897 / 0.000490 (0.410407) | 0.003394 / 0.000200 (0.003194) | 0.000075 / 0.000054 (0.000021) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022911 / 0.037411 (-0.014500) | 0.097012 / 0.014526 (0.082486) | 0.102247 / 0.176557 (-0.074309) | 0.163363 / 0.737135 (-0.573772) | 0.106897 / 0.296338 (-0.189441) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.416303 / 0.215209 (0.201094) | 4.159325 / 2.077655 (2.081671) | 1.844893 / 1.504120 (0.340773) | 1.646131 / 1.541195 (0.104936) | 1.706763 / 1.468490 (0.238273) | 0.699607 / 4.584777 (-3.885170) | 3.462048 / 3.745712 (-0.283664) | 1.939076 / 5.269862 (-3.330786) | 1.324744 / 4.565676 (-3.240932) | 0.082949 / 0.424275 (-0.341326) | 0.012327 / 0.007607 (0.004720) | 0.513812 / 0.226044 (0.287768) | 5.171021 / 2.268929 (2.902093) | 2.288039 / 55.444624 (-53.156585) | 1.957403 / 6.876477 (-4.919074) | 1.990060 / 2.142072 (-0.152013) | 0.805571 / 4.805227 (-3.999656) | 0.152641 / 6.500664 (-6.348023) | 0.068169 / 0.075469 (-0.007300) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.200624 / 1.841788 (-0.641164) | 13.836334 / 8.074308 (5.762026) | 14.065340 / 10.191392 (3.873948) | 0.143406 / 0.680424 (-0.537018) | 0.016709 / 0.534201 (-0.517492) | 0.380080 / 0.579283 (-0.199204) | 0.398414 / 0.434364 (-0.035950) | 0.479192 / 0.540337 (-0.061145) | 0.572508 / 1.386936 (-0.814428) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after 
write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006622 / 0.011353 (-0.004731) | 0.004511 / 0.011008 (-0.006497) | 0.076454 / 0.038508 (0.037946) | 0.027431 / 0.023109 (0.004322) | 0.339041 / 0.275898 (0.063143) | 0.375691 / 0.323480 (0.052211) | 0.004854 / 0.007986 (-0.003131) | 0.004654 / 0.004328 (0.000325) | 0.075300 / 0.004250 (0.071049) | 0.036469 / 0.037052 (-0.000583) | 0.341357 / 0.258489 (0.082868) | 0.381561 / 0.293841 (0.087720) | 0.031754 / 0.128546 (-0.096792) | 0.011544 / 0.075646 (-0.064102) | 0.085956 / 0.419271 (-0.333315) | 0.041704 / 0.043533 (-0.001828) | 0.340088 / 0.255139 (0.084950) | 0.364037 / 0.283200 (0.080838) | 0.091016 / 0.141683 (-0.050667) | 1.483515 / 1.452155 (0.031360) | 1.562878 / 1.492716 (0.070162) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.228019 / 0.018006 (0.210013) | 0.404809 / 0.000490 (0.404320) | 0.000384 / 0.000200 (0.000184) | 0.000060 / 0.000054 (0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025230 / 0.037411 (-0.012181) | 0.099790 / 0.014526 (0.085264) | 0.107923 / 0.176557 (-0.068634) | 0.157651 / 0.737135 (-0.579484) | 0.112525 / 0.296338 (-0.183813) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.440360 / 0.215209 (0.225151) | 4.387749 / 2.077655 (2.310094) | 2.077592 / 1.504120 (0.573472) | 1.872532 / 1.541195 (0.331337) | 1.941607 / 1.468490 (0.473117) | 0.699394 / 4.584777 (-3.885383) | 3.411210 / 3.745712 (-0.334502) | 1.901816 / 5.269862 (-3.368046) | 1.177042 / 4.565676 (-3.388634) | 0.083536 / 0.424275 (-0.340739) | 0.012418 / 0.007607 (0.004811) | 0.548463 / 0.226044 (0.322419) | 5.487107 / 2.268929 (3.218178) | 2.548076 / 55.444624 (-52.896548) | 2.215012 / 6.876477 (-4.661465) | 2.253472 / 2.142072 (0.111400) | 0.812925 / 4.805227 (-3.992302) | 0.152935 / 6.500664 (-6.347729) | 0.068144 / 0.075469 (-0.007325) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.267914 / 1.841788 (-0.573873) | 14.015185 / 8.074308 (5.940877) | 
13.153967 / 10.191392 (2.962575) | 0.140666 / 0.680424 (-0.539758) | 0.016718 / 0.534201 (-0.517483) | 0.383411 / 0.579283 (-0.195872) | 0.395424 / 0.434364 (-0.038940) | 0.466069 / 0.540337 (-0.074269) | 0.553825 / 1.386936 (-0.833111) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#14568bf072b38e3b295f29774c874c8e78b9fe37 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007463 / 0.011353 (-0.003890) | 0.005017 / 0.011008 (-0.005991) | 0.098777 / 0.038508 (0.060269) | 0.033859 / 0.023109 (0.010750) | 0.298569 / 0.275898 (0.022670) | 0.343717 / 0.323480 (0.020237) | 0.005806 / 0.007986 (-0.002180) | 0.005403 / 0.004328 (0.001074) | 0.075840 / 0.004250 (0.071590) | 0.046539 / 0.037052 (0.009487) | 0.300058 / 0.258489 (0.041569) | 0.345036 / 0.293841 (0.051195) | 0.036258 / 0.128546 (-0.092288) | 0.011992 / 0.075646 (-0.063654) | 0.334986 / 0.419271 (-0.084286) | 0.050427 / 0.043533 (0.006894) | 0.295319 / 0.255139 (0.040180) | 0.318980 / 0.283200 (0.035780) | 0.098407 / 0.141683 (-0.043276) | 1.437626 / 1.452155 (-0.014529) | 1.562548 / 1.492716 (0.069832) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.231502 / 0.018006 (0.213496) | 0.441550 / 0.000490 (0.441060) | 0.005863 / 0.000200 (0.005663) | 0.000724 / 0.000054 (0.000670) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027501 / 0.037411 (-0.009911) | 0.111490 / 0.014526 (0.096964) | 0.117503 / 0.176557 (-0.059054) | 0.173849 / 0.737135 (-0.563286) | 0.124521 / 0.296338 (-0.171818) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 
5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.419266 / 0.215209 (0.204057) | 4.170337 / 2.077655 (2.092683) | 2.015883 / 1.504120 (0.511763) | 1.832683 / 1.541195 (0.291488) | 1.950195 / 1.468490 (0.481705) | 0.698150 / 4.584777 (-3.886627) | 3.775601 / 3.745712 (0.029889) | 2.094581 / 5.269862 (-3.175281) | 1.325437 / 4.565676 (-3.240240) | 0.085382 / 0.424275 (-0.338894) | 0.012151 / 0.007607 (0.004544) | 0.526441 / 0.226044 (0.300397) | 5.256124 / 2.268929 (2.987196) | 2.488408 / 55.444624 (-52.956216) | 2.157228 / 6.876477 (-4.719249) | 2.228991 / 2.142072 (0.086919) | 0.837002 / 4.805227 (-3.968225) | 0.167520 / 6.500664 (-6.333144) | 0.066435 / 0.075469 (-0.009035) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.174544 / 1.841788 (-0.667243) | 14.684207 / 8.074308 (6.609899) | 14.494676 / 10.191392 (4.303284) | 0.143423 / 0.680424 (-0.537001) | 0.017289 / 0.534201 (-0.516912) | 0.424727 / 0.579283 (-0.154556) | 0.417077 / 0.434364 (-0.017287) | 0.498955 / 0.540337 (-0.041383) | 0.584838 / 1.386936 (-0.802098) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007666 / 0.011353 (-0.003687) | 0.005269 / 0.011008 (-0.005739) | 0.073548 / 0.038508 (0.035040) | 0.033683 / 0.023109 (0.010573) | 0.342646 / 0.275898 (0.066747) | 0.380948 / 0.323480 (0.057468) | 0.005737 / 0.007986 (-0.002248) | 0.005366 / 0.004328 (0.001038) | 0.073228 / 0.004250 (0.068978) | 0.050065 / 0.037052 (0.013013) | 0.348593 / 0.258489 (0.090104) | 0.393930 / 0.293841 (0.100089) | 0.037411 / 0.128546 (-0.091135) | 0.012476 / 0.075646 (-0.063170) | 0.084884 / 0.419271 (-0.334387) | 0.049368 / 0.043533 (0.005835) | 0.343142 / 0.255139 (0.088003) | 0.362828 / 0.283200 (0.079628) | 0.102962 / 0.141683 (-0.038721) | 1.505703 / 1.452155 (0.053549) | 1.580695 / 1.492716 (0.087979) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | 
get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.207621 / 0.018006 (0.189615) | 0.437678 / 0.000490 (0.437188) | 0.003931 / 0.000200 (0.003731) | 0.000093 / 0.000054 (0.000038) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029079 / 0.037411 (-0.008332) | 0.108600 / 0.014526 (0.094074) | 0.124787 / 0.176557 (-0.051770) | 0.173354 / 0.737135 (-0.563781) | 0.126124 / 0.296338 (-0.170214) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.427911 / 0.215209 (0.212702) | 4.254227 / 2.077655 (2.176572) | 2.052142 / 1.504120 (0.548022) | 1.857042 / 1.541195 (0.315848) | 1.965244 / 1.468490 (0.496754) | 0.707994 / 4.584777 (-3.876783) | 3.807593 / 3.745712 (0.061880) | 3.387588 / 5.269862 (-1.882274) | 1.844853 / 4.565676 (-2.720824) | 0.088548 / 0.424275 (-0.335727) | 0.012398 / 0.007607 (0.004791) | 0.565896 / 0.226044 (0.339851) | 5.228024 / 2.268929 (2.959095) | 2.467220 / 55.444624 (-52.977405) | 2.144413 / 6.876477 (-4.732064) | 2.214049 / 2.142072 (0.071977) | 0.869381 / 4.805227 (-3.935846) | 0.170991 / 6.500664 (-6.329673) | 0.064932 / 0.075469 (-0.010537) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.246661 / 1.841788 (-0.595127) | 14.902743 / 8.074308 (6.828435) | 13.264294 / 10.191392 (3.072902) | 0.165328 / 0.680424 (-0.515095) | 0.017567 / 0.534201 (-0.516634) | 0.425491 / 0.579283 (-0.153792) | 0.427327 / 0.434364 (-0.007037) | 0.526475 / 0.540337 (-0.013862) | 0.627309 / 1.386936 (-0.759627) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#dd31bce76b554447bccb2b1447440e1f8ddba035 \"CML watermark\")\n" ]
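The review comments above discuss dropping the per-call PyArrow version checks (in `folder_based_builder.py`, `table.py`, `arrow_dataset.py`) once the minimum requirement is 8.0. A hedged sketch of the general kind of guard being removed, illustrative only and not the repository's exact code:

```python
# Illustrative version-gated fallback: with a pyarrow>=8.0 floor, the else
# branch becomes dead code and the whole guard can be deleted.
import pyarrow as pa
from packaging import version

_PYARROW_GE_8 = version.parse(pa.__version__) >= version.parse("8.0.0")

def iter_batches(table: pa.Table, batch_size: int):
    if _PYARROW_GE_8:
        # Streaming path available since pyarrow 8.0.
        yield from table.to_reader(max_chunksize=batch_size)
    else:
        # Fallback for older pyarrow: materialize the batches eagerly.
        yield from table.to_batches(max_chunksize=batch_size)
```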
1,678,195,913,000
1,678,284,087,000
1,678,283,662,000
MEMBER
null
Fix these test failures for Pandas 2.0 (tested [here](https://github.com/huggingface/datasets/actions/runs/4346221280/jobs/7592010397) with pandas==2.0.0.rc0): ```text =========================== short test summary info ============================ FAILED tests/test_arrow_dataset.py::BaseDatasetTest::test_to_parquet_in_memory - ImportError: Unable to find a usable engine; tried using: 'pyarrow', 'fastparquet'. A suitable version of pyarrow or fastparquet is required for parquet support. Trying to import the above resulted in these errors: - Pandas requires version '7.0.0' or newer of 'pyarrow' (version '6.0.1' currently installed). - Missing optional dependency 'fastparquet'. fastparquet is required for parquet support. Use pip or conda to install fastparquet. FAILED tests/test_arrow_dataset.py::BaseDatasetTest::test_to_parquet_on_disk - ImportError: Unable to find a usable engine; tried using: 'pyarrow', 'fastparquet'. A suitable version of pyarrow or fastparquet is required for parquet support. Trying to import the above resulted in these errors: - Pandas requires version '7.0.0' or newer of 'pyarrow' (version '6.0.1' currently installed). - Missing optional dependency 'fastparquet'. fastparquet is required for parquet support. Use pip or conda to install fastparquet. ===== 2 failed, 2137 passed, 18 skipped, 32 warnings in 212.76s (0:03:32) ====== ``` EDIT: also for performance - with 8.0 we can use `.to_reader()`
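For reference, a minimal standalone sketch of the `Table.to_reader` API mentioned in the EDIT above (added in pyarrow 8.0; this is a generic example, not code from this repository):

```python
import pyarrow as pa

table = pa.table({"text": ["a", "b", "c"], "label": [0, 1, 0]})

# to_reader streams the table as record batches without copying the data;
# max_chunksize caps the number of rows per emitted batch.
reader = table.to_reader(max_chunksize=2)
for batch in reader:
    print(batch.num_rows, batch.schema.names)
```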
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5620/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5620/timeline
null
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5620", "html_url": "https://github.com/huggingface/datasets/pull/5620", "diff_url": "https://github.com/huggingface/datasets/pull/5620.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5620.patch", "merged_at": "2023-03-08T13:54:21" }
true
https://api.github.com/repos/huggingface/datasets/issues/5619
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5619/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5619/comments
https://api.github.com/repos/huggingface/datasets/issues/5619/events
https://github.com/huggingface/datasets/pull/5619
1,613,439,709
PR_kwDODunzps5LeaYP
5,619
unpin fsspec
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009954 / 0.011353 (-0.001398) | 0.005468 / 0.011008 (-0.005541) | 0.101228 / 0.038508 (0.062720) | 0.037878 / 0.023109 (0.014769) | 0.305635 / 0.275898 (0.029737) | 0.391672 / 0.323480 (0.068192) | 0.008893 / 0.007986 (0.000908) | 0.005861 / 0.004328 (0.001533) | 0.076940 / 0.004250 (0.072689) | 0.046242 / 0.037052 (0.009190) | 0.324033 / 0.258489 (0.065544) | 0.383306 / 0.293841 (0.089465) | 0.039298 / 0.128546 (-0.089249) | 0.012187 / 0.075646 (-0.063459) | 0.336774 / 0.419271 (-0.082498) | 0.053493 / 0.043533 (0.009960) | 0.303381 / 0.255139 (0.048242) | 0.323494 / 0.283200 (0.040295) | 0.118613 / 0.141683 (-0.023070) | 1.463430 / 1.452155 (0.011275) | 1.549856 / 1.492716 (0.057139) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.289264 / 0.018006 (0.271258) | 0.520348 / 0.000490 (0.519858) | 0.004543 / 0.000200 (0.004343) | 0.000090 / 0.000054 (0.000036) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028183 / 0.037411 (-0.009229) | 0.107869 / 0.014526 (0.093343) | 0.124019 / 0.176557 (-0.052537) | 0.167769 / 0.737135 (-0.569367) | 0.130304 / 0.296338 (-0.166034) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.402296 / 0.215209 (0.187087) | 4.018884 / 2.077655 (1.941229) | 1.834050 
/ 1.504120 (0.329930) | 1.649974 / 1.541195 (0.108779) | 1.741697 / 1.468490 (0.273207) | 0.684354 / 4.584777 (-3.900423) | 3.778213 / 3.745712 (0.032501) | 2.158086 / 5.269862 (-3.111775) | 1.472671 / 4.565676 (-3.093006) | 0.083912 / 0.424275 (-0.340363) | 0.012285 / 0.007607 (0.004678) | 0.501689 / 0.226044 (0.275645) | 5.014722 / 2.268929 (2.745794) | 2.310722 / 55.444624 (-53.133902) | 1.983214 / 6.876477 (-4.893262) | 2.154518 / 2.142072 (0.012446) | 0.821277 / 4.805227 (-3.983950) | 0.164434 / 6.500664 (-6.336231) | 0.062568 / 0.075469 (-0.012901) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.224338 / 1.841788 (-0.617450) | 14.981623 / 8.074308 (6.907315) | 14.296356 / 10.191392 (4.104964) | 0.193554 / 0.680424 (-0.486870) | 0.028511 / 0.534201 (-0.505690) | 0.437649 / 0.579283 (-0.141634) | 0.448934 / 0.434364 (0.014570) | 0.552624 / 0.540337 (0.012287) | 0.654268 / 1.386936 (-0.732668) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007772 / 0.011353 (-0.003581) | 0.005534 / 0.011008 (-0.005474) | 0.074347 / 0.038508 (0.035839) | 0.034486 / 0.023109 (0.011376) | 0.343430 / 0.275898 (0.067532) | 0.385778 / 0.323480 (0.062298) | 0.006424 / 0.007986 (-0.001562) | 0.004241 / 0.004328 (-0.000087) | 0.072839 / 0.004250 (0.068589) | 0.055523 / 0.037052 (0.018471) | 0.342778 / 0.258489 (0.084289) | 0.389961 / 0.293841 (0.096120) | 0.037238 / 0.128546 (-0.091308) | 0.012450 / 0.075646 (-0.063197) | 0.085282 / 0.419271 (-0.333990) | 0.049678 / 0.043533 (0.006146) | 0.345300 / 0.255139 (0.090161) | 0.365220 / 0.283200 (0.082020) | 0.109257 / 0.141683 (-0.032426) | 1.480284 / 1.452155 (0.028129) | 1.627881 / 1.492716 (0.135165) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.323330 / 0.018006 (0.305324) | 0.530824 / 0.000490 (0.530334) | 0.000463 / 0.000200 (0.000263) | 0.000063 / 0.000054 (0.000009) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032398 / 0.037411 (-0.005013) | 0.115889 / 0.014526 (0.101363) | 0.131093 / 0.176557 (-0.045464) | 0.180757 / 0.737135 (-0.556379) | 0.134395 / 0.296338 (-0.161943) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.423931 / 0.215209 (0.208722) | 4.238207 / 2.077655 (2.160553) | 2.075721 / 1.504120 (0.571602) | 1.887752 / 1.541195 (0.346557) | 2.055054 / 1.468490 (0.586564) | 0.703145 / 4.584777 (-3.881632) | 3.937120 / 3.745712 (0.191408) | 3.748550 / 5.269862 (-1.521311) | 1.562849 / 4.565676 (-3.002827) | 0.087695 / 0.424275 (-0.336580) | 0.012614 / 0.007607 (0.005007) | 0.523901 / 0.226044 (0.297856) | 5.230210 / 2.268929 (2.961282) | 2.592667 / 55.444624 (-52.851958) | 2.345662 / 6.876477 (-4.530815) | 2.475388 / 2.142072 (0.333316) | 0.836443 / 4.805227 (-3.968784) | 0.170304 / 6.500664 (-6.330360) | 0.067741 / 0.075469 (-0.007729) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.255171 / 1.841788 (-0.586617) | 16.312856 / 8.074308 (8.238548) | 13.184770 / 10.191392 (2.993378) | 0.145557 / 0.680424 (-0.534867) | 0.017723 / 0.534201 (-0.516478) | 0.423447 / 0.579283 (-0.155836) | 0.423063 / 0.434364 (-0.011301) | 0.494159 / 0.540337 (-0.046179) | 0.589590 / 1.386936 (-0.797346) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#4ea6f1db3f80eb3bb7ac6f252c2cd5bd97537c01 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.012068 / 0.011353 (0.000715) | 0.006127 / 0.011008 (-0.004881) | 0.112550 / 0.038508 (0.074042) | 0.043201 / 0.023109 (0.020092) | 0.346666 / 0.275898 (0.070768) | 0.413852 / 0.323480 (0.090372) | 0.009342 / 0.007986 (0.001356) | 0.006302 / 0.004328 (0.001974) | 0.086901 / 0.004250 (0.082650) | 0.053992 / 0.037052 (0.016940) | 0.362192 / 0.258489 (0.103703) | 0.409867 / 0.293841 (0.116026) | 0.046124 / 0.128546 (-0.082422) | 0.014139 / 0.075646 (-0.061507) | 0.386386 / 0.419271 (-0.032886) | 0.058465 / 0.043533 (0.014932) | 0.344832 / 0.255139 (0.089693) | 0.370684 / 0.283200 (0.087485) | 0.122886 / 0.141683 (-0.018796) | 1.724013 / 1.452155 (0.271858) | 1.775756 / 1.492716 (0.283039) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.220289 / 0.018006 (0.202283) | 0.493585 / 0.000490 (0.493096) | 0.001970 / 0.000200 (0.001770) | 0.000099 / 0.000054 (0.000044) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030763 / 0.037411 (-0.006649) | 0.128237 / 0.014526 (0.113711) | 0.138364 / 0.176557 (-0.038192) | 0.188115 / 0.737135 (-0.549021) | 0.145367 / 0.296338 (-0.150972) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.452487 / 0.215209 (0.237277) | 4.592728 / 2.077655 (2.515074) | 2.075712 / 1.504120 (0.571592) | 1.845424 / 1.541195 (0.304229) | 1.956400 / 1.468490 (0.487910) | 0.808387 / 4.584777 (-3.776390) | 4.483678 / 3.745712 (0.737966) | 3.870287 / 5.269862 (-1.399574) | 2.151205 / 4.565676 (-2.414471) | 0.098123 / 0.424275 (-0.326152) | 0.014139 / 0.007607 (0.006531) | 0.577775 / 0.226044 (0.351730) | 5.785545 / 2.268929 (3.516616) | 2.614418 / 55.444624 (-52.830206) | 2.312136 / 6.876477 (-4.564341) | 2.364189 / 2.142072 (0.222117) | 0.970028 / 4.805227 (-3.835199) | 0.189592 / 6.500664 (-6.311072) | 0.072883 / 0.075469 (-0.002586) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.414252 / 1.841788 (-0.427535) | 17.518307 / 8.074308 (9.443999) | 16.053748 / 10.191392 (5.862356) | 0.215297 / 0.680424 (-0.465127) | 0.033947 / 0.534201 (-0.500253) | 0.525794 / 0.579283 (-0.053489) | 0.514676 / 0.434364 (0.080312) | 0.595066 / 0.540337 (0.054728) 
| 0.689404 / 1.386936 (-0.697532) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008185 / 0.011353 (-0.003168) | 0.005776 / 0.011008 (-0.005232) | 0.084919 / 0.038508 (0.046411) | 0.037575 / 0.023109 (0.014466) | 0.401192 / 0.275898 (0.125294) | 0.443920 / 0.323480 (0.120440) | 0.006446 / 0.007986 (-0.001540) | 0.004428 / 0.004328 (0.000099) | 0.084013 / 0.004250 (0.079763) | 0.052013 / 0.037052 (0.014961) | 0.398429 / 0.258489 (0.139940) | 0.455676 / 0.293841 (0.161836) | 0.041568 / 0.128546 (-0.086978) | 0.013631 / 0.075646 (-0.062015) | 0.098709 / 0.419271 (-0.320563) | 0.055889 / 0.043533 (0.012356) | 0.402002 / 0.255139 (0.146863) | 0.424248 / 0.283200 (0.141049) | 0.113288 / 0.141683 (-0.028395) | 1.672214 / 1.452155 (0.220059) | 1.792940 / 1.492716 (0.300223) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.211847 / 0.018006 (0.193841) | 0.486711 / 0.000490 (0.486221) | 0.002907 / 0.000200 (0.002707) | 0.000118 / 0.000054 (0.000063) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032931 / 0.037411 (-0.004480) | 0.142073 / 0.014526 (0.127547) | 0.142872 / 0.176557 (-0.033685) | 0.202612 / 0.737135 (-0.534523) | 0.154390 / 0.296338 (-0.141949) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.488682 / 0.215209 (0.273473) | 4.755805 / 2.077655 (2.678150) | 2.348778 / 1.504120 (0.844658) | 2.144992 / 1.541195 (0.603797) | 2.245654 / 1.468490 
(0.777164) | 0.792690 / 4.584777 (-3.792087) | 4.569190 / 3.745712 (0.823478) | 3.919317 / 5.269862 (-1.350545) | 2.140302 / 4.565676 (-2.425374) | 0.096430 / 0.424275 (-0.327845) | 0.014551 / 0.007607 (0.006944) | 0.605138 / 0.226044 (0.379094) | 5.989470 / 2.268929 (3.720542) | 2.915525 / 55.444624 (-52.529099) | 2.516243 / 6.876477 (-4.360234) | 2.673114 / 2.142072 (0.531041) | 0.932330 / 4.805227 (-3.872897) | 0.191456 / 6.500664 (-6.309209) | 0.073887 / 0.075469 (-0.001582) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.455552 / 1.841788 (-0.386236) | 17.824864 / 8.074308 (9.750556) | 15.764150 / 10.191392 (5.572758) | 0.184935 / 0.680424 (-0.495489) | 0.020552 / 0.534201 (-0.513649) | 0.486816 / 0.579283 (-0.092467) | 0.489006 / 0.434364 (0.054642) | 0.609826 / 0.540337 (0.069488) | 0.721313 / 1.386936 (-0.665623) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#a0a35c5fa84a8a7df656c1f5b0a7266126fa9b75 \"CML watermark\")\n" ]
1,678,195,361,000
1,678,196,821,000
1,678,196,342,000
MEMBER
null
close https://github.com/huggingface/datasets/issues/5618
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5619/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5619/timeline
null
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5619", "html_url": "https://github.com/huggingface/datasets/pull/5619", "diff_url": "https://github.com/huggingface/datasets/pull/5619.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5619.patch", "merged_at": "2023-03-07T13:39:02" }
true
https://api.github.com/repos/huggingface/datasets/issues/5618
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5618/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5618/comments
https://api.github.com/repos/huggingface/datasets/issues/5618/events
https://github.com/huggingface/datasets/issues/5618
1,612,977,934
I_kwDODunzps5gJBcO
5,618
Unpin fsspec < 2023.3.0 once issue fixed
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,678,178,511,000
1,678,196,343,000
1,678,196,343,000
MEMBER
null
Unpin the `fsspec` upper version once the root cause of our CI break is fixed. See: - #5614
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5618/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5618/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5617
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5617/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5617/comments
https://api.github.com/repos/huggingface/datasets/issues/5617/events
https://github.com/huggingface/datasets/pull/5617
1,612,947,422
PR_kwDODunzps5LcvI-
5,617
Fix CI by temporarily pinning fsspec < 2023.3.0
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008771 / 0.011353 (-0.002582) | 0.004665 / 0.011008 (-0.006343) | 0.101645 / 0.038508 (0.063137) | 0.030190 / 0.023109 (0.007081) | 0.298581 / 0.275898 (0.022683) | 0.371206 / 0.323480 (0.047727) | 0.007272 / 0.007986 (-0.000714) | 0.003432 / 0.004328 (-0.000896) | 0.078645 / 0.004250 (0.074395) | 0.037640 / 0.037052 (0.000588) | 0.314014 / 0.258489 (0.055525) | 0.345682 / 0.293841 (0.051841) | 0.033675 / 0.128546 (-0.094871) | 0.011513 / 0.075646 (-0.064134) | 0.320683 / 0.419271 (-0.098589) | 0.041633 / 0.043533 (-0.001900) | 0.302697 / 0.255139 (0.047558) | 0.323560 / 0.283200 (0.040361) | 0.089309 / 0.141683 (-0.052374) | 1.477570 / 1.452155 (0.025415) | 1.528004 / 1.492716 (0.035287) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.184710 / 0.018006 (0.166704) | 0.412794 / 0.000490 (0.412305) | 0.001421 / 0.000200 (0.001221) | 0.000069 / 0.000054 (0.000014) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023133 / 0.037411 (-0.014278) | 0.099492 / 0.014526 (0.084967) | 0.104806 / 0.176557 (-0.071751) | 0.150765 / 0.737135 (-0.586370) | 0.110127 / 0.296338 (-0.186211) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.438642 / 0.215209 (0.223433) | 4.349753 / 2.077655 (2.272098) | 
2.178754 / 1.504120 (0.674634) | 1.952839 / 1.541195 (0.411645) | 1.840574 / 1.468490 (0.372084) | 0.694016 / 4.584777 (-3.890761) | 3.375186 / 3.745712 (-0.370526) | 1.892391 / 5.269862 (-3.377470) | 1.177643 / 4.565676 (-3.388033) | 0.082328 / 0.424275 (-0.341947) | 0.012280 / 0.007607 (0.004673) | 0.534478 / 0.226044 (0.308434) | 5.377043 / 2.268929 (3.108114) | 2.645273 / 55.444624 (-52.799351) | 2.336391 / 6.876477 (-4.540086) | 2.387917 / 2.142072 (0.245845) | 0.814399 / 4.805227 (-3.990828) | 0.149226 / 6.500664 (-6.351438) | 0.066614 / 0.075469 (-0.008855) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.205467 / 1.841788 (-0.636321) | 13.857481 / 8.074308 (5.783173) | 14.269958 / 10.191392 (4.078566) | 0.152199 / 0.680424 (-0.528225) | 0.029083 / 0.534201 (-0.505118) | 0.397590 / 0.579283 (-0.181693) | 0.410587 / 0.434364 (-0.023777) | 0.480479 / 0.540337 (-0.059858) | 0.576014 / 1.386936 (-0.810922) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006956 / 0.011353 (-0.004397) | 0.004914 / 0.011008 (-0.006094) | 0.077571 / 0.038508 (0.039063) | 0.028309 / 0.023109 (0.005200) | 0.344523 / 0.275898 (0.068625) | 0.383039 / 0.323480 (0.059560) | 0.005202 / 0.007986 (-0.002783) | 0.003513 / 0.004328 (-0.000816) | 0.076393 / 0.004250 (0.072142) | 0.042035 / 0.037052 (0.004982) | 0.342950 / 0.258489 (0.084461) | 0.387432 / 0.293841 (0.093591) | 0.032267 / 0.128546 (-0.096280) | 0.011914 / 0.075646 (-0.063732) | 0.087140 / 0.419271 (-0.332131) | 0.042624 / 0.043533 (-0.000909) | 0.342391 / 0.255139 (0.087253) | 0.367016 / 0.283200 (0.083817) | 0.091757 / 0.141683 (-0.049926) | 1.515845 / 1.452155 (0.063690) | 1.607929 / 1.492716 (0.115213) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.234461 / 0.018006 (0.216455) | 0.420430 / 0.000490 (0.419941) | 0.000403 / 0.000200 (0.000203) | 0.000059 / 0.000054 (0.000005) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026639 / 0.037411 (-0.010772) | 0.101860 / 0.014526 (0.087334) | 0.109696 / 0.176557 (-0.066860) | 0.160902 / 0.737135 (-0.576233) | 0.112431 / 0.296338 (-0.183907) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.438444 / 0.215209 (0.223235) | 4.378881 / 2.077655 (2.301226) | 2.063975 / 1.504120 (0.559855) | 1.863069 / 1.541195 (0.321874) | 1.955684 / 1.468490 (0.487193) | 0.694106 / 4.584777 (-3.890671) | 3.467683 / 3.745712 (-0.278029) | 2.882441 / 5.269862 (-2.387421) | 1.484533 / 4.565676 (-3.081143) | 0.082682 / 0.424275 (-0.341593) | 0.012597 / 0.007607 (0.004990) | 0.539219 / 0.226044 (0.313174) | 5.384838 / 2.268929 (3.115909) | 2.528273 / 55.444624 (-52.916351) | 2.190332 / 6.876477 (-4.686145) | 2.252573 / 2.142072 (0.110500) | 0.801047 / 4.805227 (-4.004180) | 0.151082 / 6.500664 (-6.349582) | 0.067564 / 0.075469 (-0.007905) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.306469 / 1.841788 (-0.535319) | 14.220154 / 8.074308 (6.145846) | 13.300979 / 10.191392 (3.109586) | 0.153827 / 0.680424 (-0.526597) | 0.016818 / 0.534201 (-0.517383) | 0.383528 / 0.579283 (-0.195755) | 0.393970 / 0.434364 (-0.040394) | 0.468395 / 0.540337 (-0.071943) | 0.558748 / 1.386936 (-0.828188) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#824860ca204a3bd84a7d63f71df5df4c56c2432f \"CML watermark\")\n" ]
1,678,177,100,000
1,678,178,695,000
1,678,178,248,000
MEMBER
null
As a hotfix for our CI, temporarily pin `fsspec`. Fix #5616. Until the root cause is fixed, see: - #5614
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5617/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5617/timeline
null
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5617", "html_url": "https://github.com/huggingface/datasets/pull/5617", "diff_url": "https://github.com/huggingface/datasets/pull/5617.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5617.patch", "merged_at": "2023-03-07T08:37:28" }
true
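A minimal sketch of what the temporary pin from PR #5617 amounts to in `setup.py`. Only the `<2023.3.0` upper bound comes from the PR title above; the variable name, the extra, and the surrounding entries are assumptions for illustration.

```py
# setup.py (sketch): only the "<2023.3.0" upper bound is taken from PR #5617;
# the list name and the [http] extra shown here are assumptions.
REQUIRED_PKGS = [
    # ... other dependencies ...
    "fsspec[http]<2023.3.0",  # temporary pin until fsspec/filesystem_spec#1205 is resolved
]
```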
https://api.github.com/repos/huggingface/datasets/issues/5616
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5616/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5616/comments
https://api.github.com/repos/huggingface/datasets/issues/5616/events
https://github.com/huggingface/datasets/issues/5616
1,612,932,508
I_kwDODunzps5gI2Wc
5,616
CI is broken after fsspec-2023.3.0 release
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
[]
1,678,176,399,000
1,678,178,249,000
1,678,178,249,000
MEMBER
null
As reported by @lhoestq, our CI is broken after the `fsspec` 2023.3.0 release: ``` FAILED tests/test_filesystem.py::test_compression_filesystems[Bz2FileSystem] - AssertionError: assert [{'created': ...: False, ...}] == ['file.txt'] At index 0 diff: {'name': 'file.txt', 'size': 70, 'type': 'file', 'created': 1678175677.1887748, 'islink': False, 'mode': 33188, 'uid': 1001, 'gid': 123, 'mtime': 1678175677.1887748, 'ino': 286957, 'nlink': 1} != 'file.txt' Full diff: [ - 'file.txt', + {'created': 1678175677.1887748, + 'gid': 123, + 'ino': 286957, + 'islink': False, + 'mode': 33188, + 'mtime': 1678175677.1887748, + 'name': 'file.txt', + 'nlink': 1, + 'size': 70, + 'type': 'file', + 'uid': 1001}, ] ``` Also: ``` FAILED tests/test_filesystem.py::test_compression_filesystems[GzipFileSystem] - AssertionError: assert [{'created': ...: False, ...}] == ['file.txt'] FAILED tests/test_filesystem.py::test_compression_filesystems[Lz4FileSystem] - AssertionError: assert [{'created': ...: False, ...}] == ['file.txt'] FAILED tests/test_filesystem.py::test_compression_filesystems[XzFileSystem] - AssertionError: assert [{'created': ...: False, ...}] == ['file.txt'] FAILED tests/test_filesystem.py::test_compression_filesystems[ZstdFileSystem] - AssertionError: assert [{'created': ...: False, ...}] == ['file.txt'] ===== 5 failed, 2134 passed, 18 skipped, 38 warnings in 157.21s (0:02:37) ====== ``` See: - fsspec/filesystem_spec#1205
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5616/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5616/timeline
null
completed
null
null
false
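The failures in #5616 come from `fsspec` returning info dicts where the tests expected plain file names. A minimal sketch of a version-robust way to compare `ls()` output, using fsspec's built-in memory filesystem rather than the compression filesystems from the failing tests; the file name and exact path form are illustrative.

```py
import fsspec

# Use the in-memory filesystem as a stand-in for the compression filesystems.
fs = fsspec.filesystem("memory")
with fs.open("/file.txt", "w") as f:
    f.write("some text")

# ls(detail=True) returns info dicts ({"name", "size", "type", ...});
# ls(detail=False) returns plain names. Normalizing to names keeps an
# assertion stable when a filesystem (or fsspec release) changes which
# of the two forms it returns by default.
entries = fs.ls("/")
names = [e["name"] if isinstance(e, dict) else e for e in entries]
print(names)  # expected: ["/file.txt"] (exact path form may vary by filesystem)
```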
https://api.github.com/repos/huggingface/datasets/issues/5615
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5615/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5615/comments
https://api.github.com/repos/huggingface/datasets/issues/5615/events
https://github.com/huggingface/datasets/issues/5615
1,612,552,653
I_kwDODunzps5gHZnN
5,615
IterableDataset.add_column is unable to accept another IterableDataset as a parameter.
{ "login": "zsaladin", "id": 6466389, "node_id": "MDQ6VXNlcjY0NjYzODk=", "avatar_url": "https://avatars.githubusercontent.com/u/6466389?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zsaladin", "html_url": "https://github.com/zsaladin", "followers_url": "https://api.github.com/users/zsaladin/followers", "following_url": "https://api.github.com/users/zsaladin/following{/other_user}", "gists_url": "https://api.github.com/users/zsaladin/gists{/gist_id}", "starred_url": "https://api.github.com/users/zsaladin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zsaladin/subscriptions", "organizations_url": "https://api.github.com/users/zsaladin/orgs", "repos_url": "https://api.github.com/users/zsaladin/repos", "events_url": "https://api.github.com/users/zsaladin/events{/privacy}", "received_events_url": "https://api.github.com/users/zsaladin/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892913, "node_id": "MDU6TGFiZWwxOTM1ODkyOTEz", "url": "https://api.github.com/repos/huggingface/datasets/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": "This will not be worked on" } ]
closed
false
null
[]
[ "Hi! You can use `concatenate_datasets([ids1, ids2], axis=1)` to do this." ]
1,678,153,920,000
1,678,375,445,000
1,678,375,434,000
NONE
null
### Describe the bug `IterableDataset.add_column` raises an exception when passing another `IterableDataset` as a parameter. The method seems to accept only eagerly evaluated values. https://github.com/huggingface/datasets/blob/35b789e8f6826b6b5a6b48fcc2416c890a1f326a/src/datasets/iterable_dataset.py#L1388-L1391 I wrote the code below to work around it. ```py def add_column(dataset: IterableDataset, name: str, add_dataset: IterableDataset, key: str) -> IterableDataset: iter_add_dataset = iter(add_dataset) def add_column_fn(example): if name in example: raise ValueError(f"Error when adding {name}: column {name} is already in the dataset.") return {name: next(iter_add_dataset)[key]} return dataset.map(add_column_fn) ``` Is there another way to do it? Or is this intended? ### Steps to reproduce the bug The code below raises `NotImplementedError`: ```py from datasets import IterableDataset def gen(num): yield {f"col{num}": 1} yield {f"col{num}": 2} yield {f"col{num}": 3} ids1 = IterableDataset.from_generator(gen, gen_kwargs={"num": 1}) ids2 = IterableDataset.from_generator(gen, gen_kwargs={"num": 2}) new_ids = ids1.add_column("new_col", ids2) for row in new_ids: print(row) ``` ### Expected behavior `IterableDataset.add_column` should be able to take an `IterableDataset` and lazily evaluated values as a parameter, since `IterableDataset` is itself lazily evaluated. ### Environment info - `datasets` version: 2.8.0 - Platform: Linux-3.10.0-1160.36.2.el7.x86_64-x86_64-with-glibc2.17 - Python version: 3.9.7 - PyArrow version: 11.0.0 - Pandas version: 1.5.3
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5615/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5615/timeline
null
completed
null
null
false
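A minimal sketch of the `concatenate_datasets([ids1, ids2], axis=1)` alternative suggested in the comment on #5615, reusing the generators from the reproduction above. That this works lazily for `IterableDataset` is taken from the maintainer's comment; the exact printed row contents are an assumption about how the two columns pair up.

```py
from datasets import IterableDataset, concatenate_datasets

def gen(num):
    yield {f"col{num}": 1}
    yield {f"col{num}": 2}
    yield {f"col{num}": 3}

ids1 = IterableDataset.from_generator(gen, gen_kwargs={"num": 1})
ids2 = IterableDataset.from_generator(gen, gen_kwargs={"num": 2})

# axis=1 concatenates column-wise and stays lazy, so it plays the role of
# add_column(name, other_iterable_dataset) without eagerly materializing rows.
new_ids = concatenate_datasets([ids1, ids2], axis=1)
for row in new_ids:
    print(row)  # e.g. {"col1": 1, "col2": 1}
```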
https://api.github.com/repos/huggingface/datasets/issues/5614
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5614/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5614/comments
https://api.github.com/repos/huggingface/datasets/issues/5614/events
https://github.com/huggingface/datasets/pull/5614
1,611,896,357
PR_kwDODunzps5LZOTd
5,614
Fix archive fs test
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008664 / 0.011353 (-0.002689) | 0.004622 / 0.011008 (-0.006387) | 0.101716 / 0.038508 (0.063208) | 0.030044 / 0.023109 (0.006935) | 0.298476 / 0.275898 (0.022578) | 0.360873 / 0.323480 (0.037393) | 0.007012 / 0.007986 (-0.000974) | 0.003409 / 0.004328 (-0.000919) | 0.077731 / 0.004250 (0.073480) | 0.035493 / 0.037052 (-0.001560) | 0.311474 / 0.258489 (0.052985) | 0.357276 / 0.293841 (0.063435) | 0.033909 / 0.128546 (-0.094638) | 0.011315 / 0.075646 (-0.064332) | 0.323149 / 0.419271 (-0.096122) | 0.040678 / 0.043533 (-0.002855) | 0.298487 / 0.255139 (0.043348) | 0.323107 / 0.283200 (0.039907) | 0.086641 / 0.141683 (-0.055042) | 1.452905 / 1.452155 (0.000750) | 1.510953 / 1.492716 (0.018237) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.190607 / 0.018006 (0.172601) | 0.409786 / 0.000490 (0.409297) | 0.000818 / 0.000200 (0.000618) | 0.000075 / 0.000054 (0.000020) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023267 / 0.037411 (-0.014144) | 0.095390 / 0.014526 (0.080864) | 0.104381 / 0.176557 (-0.072175) | 0.150735 / 0.737135 (-0.586401) | 0.106876 / 0.296338 (-0.189462) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.434259 / 0.215209 (0.219050) | 4.326978 / 2.077655 (2.249323) | 
2.036690 / 1.504120 (0.532570) | 1.836459 / 1.541195 (0.295264) | 1.904003 / 1.468490 (0.435513) | 0.697265 / 4.584777 (-3.887512) | 3.435911 / 3.745712 (-0.309802) | 3.240918 / 5.269862 (-2.028944) | 1.629220 / 4.565676 (-2.936456) | 0.083158 / 0.424275 (-0.341117) | 0.012604 / 0.007607 (0.004997) | 0.539818 / 0.226044 (0.313773) | 5.397860 / 2.268929 (3.128932) | 2.483890 / 55.444624 (-52.960735) | 2.132404 / 6.876477 (-4.744072) | 2.162583 / 2.142072 (0.020510) | 0.817773 / 4.805227 (-3.987454) | 0.151677 / 6.500664 (-6.348987) | 0.066569 / 0.075469 (-0.008900) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.243449 / 1.841788 (-0.598339) | 13.699854 / 8.074308 (5.625546) | 13.930979 / 10.191392 (3.739587) | 0.165344 / 0.680424 (-0.515079) | 0.028910 / 0.534201 (-0.505291) | 0.396201 / 0.579283 (-0.183082) | 0.404448 / 0.434364 (-0.029916) | 0.482031 / 0.540337 (-0.058306) | 0.570023 / 1.386936 (-0.816913) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006785 / 0.011353 (-0.004568) | 0.004643 / 0.011008 (-0.006365) | 0.076755 / 0.038508 (0.038247) | 0.027893 / 0.023109 (0.004783) | 0.342539 / 0.275898 (0.066641) | 0.379103 / 0.323480 (0.055623) | 0.005107 / 0.007986 (-0.002879) | 0.003413 / 0.004328 (-0.000915) | 0.075779 / 0.004250 (0.071528) | 0.039251 / 0.037052 (0.002199) | 0.343425 / 0.258489 (0.084935) | 0.385292 / 0.293841 (0.091451) | 0.032229 / 0.128546 (-0.096317) | 0.011666 / 0.075646 (-0.063980) | 0.086452 / 0.419271 (-0.332819) | 0.042918 / 0.043533 (-0.000615) | 0.343145 / 0.255139 (0.088006) | 0.367916 / 0.283200 (0.084717) | 0.090810 / 0.141683 (-0.050873) | 1.471679 / 1.452155 (0.019524) | 1.566683 / 1.492716 (0.073966) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.220343 / 0.018006 (0.202336) | 0.396155 / 0.000490 (0.395665) | 0.003831 / 0.000200 (0.003631) | 0.000080 / 0.000054 (0.000025) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024990 / 0.037411 (-0.012421) | 0.101270 / 0.014526 (0.086744) | 0.110115 / 0.176557 (-0.066442) | 0.161770 / 0.737135 (-0.575365) | 0.112187 / 0.296338 (-0.184151) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.436199 / 0.215209 (0.220989) | 4.329084 / 2.077655 (2.251429) | 2.043335 / 1.504120 (0.539215) | 1.836799 / 1.541195 (0.295604) | 1.908362 / 1.468490 (0.439872) | 0.700518 / 4.584777 (-3.884259) | 3.418003 / 3.745712 (-0.327710) | 1.860621 / 5.269862 (-3.409241) | 1.171343 / 4.565676 (-3.394334) | 0.083150 / 0.424275 (-0.341125) | 0.012543 / 0.007607 (0.004936) | 0.533528 / 0.226044 (0.307483) | 5.339660 / 2.268929 (3.070732) | 2.499494 / 55.444624 (-52.945131) | 2.154773 / 6.876477 (-4.721704) | 2.198734 / 2.142072 (0.056661) | 0.803383 / 4.805227 (-4.001844) | 0.150980 / 6.500664 (-6.349684) | 0.068050 / 0.075469 (-0.007419) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.309487 / 1.841788 (-0.532301) | 14.177068 / 8.074308 (6.102760) | 13.218912 / 10.191392 (3.027520) | 0.156857 / 0.680424 (-0.523567) | 0.016534 / 0.534201 (-0.517667) | 0.383986 / 0.579283 (-0.195297) | 0.395264 / 0.434364 (-0.039100) | 0.442310 / 0.540337 (-0.098027) | 0.535535 / 1.386936 (-0.851401) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#64e24bca88be711f4fdcb9c18edaddc1db0bbe2e \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009446 / 0.011353 (-0.001907) | 0.005061 / 0.011008 (-0.005948) | 0.099783 / 0.038508 (0.061275) | 0.036379 / 0.023109 (0.013270) | 0.296769 / 0.275898 (0.020871) | 0.368990 / 0.323480 (0.045510) | 0.007891 / 0.007986 (-0.000094) | 0.003940 / 0.004328 (-0.000389) | 0.076284 / 0.004250 (0.072034) | 0.044390 / 0.037052 (0.007337) | 0.313373 / 0.258489 (0.054884) | 0.361118 / 0.293841 (0.067277) | 0.039058 / 0.128546 (-0.089488) | 0.012016 / 0.075646 (-0.063631) | 0.334239 / 0.419271 (-0.085033) | 0.047028 / 0.043533 (0.003495) | 0.297766 / 0.255139 (0.042627) | 0.312853 / 0.283200 (0.029653) | 0.099117 / 0.141683 (-0.042566) | 1.475487 / 1.452155 (0.023332) | 1.557487 / 1.492716 (0.064771) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.206243 / 0.018006 (0.188237) | 0.443920 / 0.000490 (0.443430) | 0.001404 / 0.000200 (0.001205) | 0.000078 / 0.000054 (0.000024) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026347 / 0.037411 (-0.011065) | 0.105880 / 0.014526 (0.091354) | 0.116227 / 0.176557 (-0.060330) | 0.157404 / 0.737135 (-0.579732) | 0.121668 / 0.296338 (-0.174671) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.398614 / 0.215209 (0.183405) | 3.970657 / 2.077655 (1.893002) | 1.778899 / 1.504120 (0.274779) | 1.591806 / 1.541195 (0.050611) | 1.687717 / 1.468490 (0.219227) | 0.695399 / 4.584777 (-3.889378) | 3.829281 / 3.745712 (0.083569) | 2.140856 / 5.269862 (-3.129006) | 1.355027 / 4.565676 (-3.210650) | 0.085714 / 0.424275 (-0.338561) | 0.012130 / 0.007607 (0.004523) | 0.505807 / 0.226044 (0.279762) | 5.053098 / 2.268929 (2.784170) | 2.321694 / 55.444624 (-53.122931) | 2.015909 / 6.876477 (-4.860568) | 2.100862 / 2.142072 (-0.041210) | 0.855689 / 4.805227 (-3.949539) | 0.167192 / 6.500664 (-6.333472) | 0.062376 / 0.075469 (-0.013093) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.196647 / 1.841788 (-0.645141) | 14.971356 / 8.074308 (6.897048) | 13.897184 / 10.191392 (3.705792) | 0.193267 / 0.680424 (-0.487157) | 0.029252 / 0.534201 (-0.504949) | 0.444885 / 0.579283 (-0.134398) | 0.452792 / 0.434364 (0.018429) | 0.550157 / 0.540337 
(0.009819) | 0.658524 / 1.386936 (-0.728412) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007774 / 0.011353 (-0.003579) | 0.005304 / 0.011008 (-0.005704) | 0.075530 / 0.038508 (0.037022) | 0.034930 / 0.023109 (0.011821) | 0.343879 / 0.275898 (0.067981) | 0.386487 / 0.323480 (0.063008) | 0.005998 / 0.007986 (-0.001987) | 0.005619 / 0.004328 (0.001291) | 0.075865 / 0.004250 (0.071614) | 0.050499 / 0.037052 (0.013446) | 0.345503 / 0.258489 (0.087014) | 0.392081 / 0.293841 (0.098240) | 0.037118 / 0.128546 (-0.091429) | 0.012540 / 0.075646 (-0.063107) | 0.086202 / 0.419271 (-0.333069) | 0.050672 / 0.043533 (0.007139) | 0.343622 / 0.255139 (0.088483) | 0.353853 / 0.283200 (0.070653) | 0.105408 / 0.141683 (-0.036274) | 1.460695 / 1.452155 (0.008540) | 1.524270 / 1.492716 (0.031554) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.219356 / 0.018006 (0.201350) | 0.440740 / 0.000490 (0.440251) | 0.014313 / 0.000200 (0.014114) | 0.000103 / 0.000054 (0.000048) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030297 / 0.037411 (-0.007115) | 0.108723 / 0.014526 (0.094197) | 0.125085 / 0.176557 (-0.051471) | 0.176664 / 0.737135 (-0.560471) | 0.126659 / 0.296338 (-0.169680) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.445790 / 0.215209 (0.230581) | 4.241046 / 2.077655 (2.163391) | 2.027381 / 1.504120 (0.523261) | 1.821070 / 1.541195 (0.279876) | 1.934417 / 
1.468490 (0.465927) | 0.710897 / 4.584777 (-3.873880) | 3.840397 / 3.745712 (0.094685) | 3.959196 / 5.269862 (-1.310666) | 1.646069 / 4.565676 (-2.919608) | 0.088615 / 0.424275 (-0.335660) | 0.012321 / 0.007607 (0.004714) | 0.523463 / 0.226044 (0.297418) | 5.240147 / 2.268929 (2.971218) | 2.521639 / 55.444624 (-52.922986) | 2.246535 / 6.876477 (-4.629942) | 2.365913 / 2.142072 (0.223841) | 0.851288 / 4.805227 (-3.953939) | 0.170179 / 6.500664 (-6.330485) | 0.064732 / 0.075469 (-0.010737) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.255505 / 1.841788 (-0.586283) | 15.305457 / 8.074308 (7.231148) | 13.214186 / 10.191392 (3.022794) | 0.188971 / 0.680424 (-0.491453) | 0.018972 / 0.534201 (-0.515229) | 0.429621 / 0.579283 (-0.149662) | 0.428738 / 0.434364 (-0.005626) | 0.536241 / 0.540337 (-0.004096) | 0.632998 / 1.386936 (-0.753938) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#b64fae9509f6e9da9cabf0ce677966598fc61e38 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008435 / 0.011353 (-0.002918) | 0.004454 / 0.011008 (-0.006554) | 0.099091 / 0.038508 (0.060583) | 0.028890 / 0.023109 (0.005781) | 0.297450 / 0.275898 (0.021551) | 0.329025 / 0.323480 (0.005545) | 0.006584 / 0.007986 (-0.001401) | 0.004669 / 0.004328 (0.000340) | 0.077387 / 0.004250 (0.073137) | 0.033701 / 0.037052 (-0.003352) | 0.301272 / 0.258489 (0.042783) | 0.345401 / 0.293841 (0.051560) | 0.033473 / 0.128546 (-0.095073) | 0.011244 / 0.075646 (-0.064402) | 0.321941 / 0.419271 (-0.097330) | 0.040646 / 0.043533 (-0.002887) | 0.306686 / 0.255139 (0.051547) | 0.321868 / 0.283200 (0.038668) | 0.084281 / 0.141683 (-0.057401) | 1.491414 / 1.452155 (0.039259) | 1.542799 / 1.492716 (0.050083) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.188368 / 0.018006 (0.170362) | 0.398595 / 0.000490 (0.398105) | 0.000805 / 0.000200 
(0.000605) | 0.000075 / 0.000054 (0.000021) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022690 / 0.037411 (-0.014721) | 0.096795 / 0.014526 (0.082269) | 0.104037 / 0.176557 (-0.072520) | 0.149409 / 0.737135 (-0.587727) | 0.108022 / 0.296338 (-0.188317) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.419316 / 0.215209 (0.204107) | 4.186850 / 2.077655 (2.109196) | 1.920182 / 1.504120 (0.416062) | 1.715493 / 1.541195 (0.174298) | 1.757767 / 1.468490 (0.289277) | 0.692296 / 4.584777 (-3.892480) | 3.342330 / 3.745712 (-0.403382) | 1.842063 / 5.269862 (-3.427798) | 1.150190 / 4.565676 (-3.415487) | 0.082792 / 0.424275 (-0.341483) | 0.012540 / 0.007607 (0.004933) | 0.528867 / 0.226044 (0.302822) | 5.297818 / 2.268929 (3.028890) | 2.313173 / 55.444624 (-53.131451) | 1.941723 / 6.876477 (-4.934754) | 1.982948 / 2.142072 (-0.159125) | 0.808951 / 4.805227 (-3.996276) | 0.149338 / 6.500664 (-6.351326) | 0.064838 / 0.075469 (-0.010631) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.187865 / 1.841788 (-0.653923) | 13.381918 / 8.074308 (5.307610) | 13.730627 / 10.191392 (3.539234) | 0.149976 / 0.680424 (-0.530447) | 0.028249 / 0.534201 (-0.505952) | 0.392591 / 0.579283 (-0.186692) | 0.403451 / 0.434364 (-0.030912) | 0.467484 / 0.540337 (-0.072853) | 0.560296 / 1.386936 (-0.826640) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006440 / 0.011353 (-0.004913) | 0.004488 / 0.011008 (-0.006521) | 0.077875 / 0.038508 (0.039367) | 0.027284 / 0.023109 (0.004174) | 0.341625 / 0.275898 (0.065727) | 0.374960 / 0.323480 (0.051480) | 0.005581 / 0.007986 (-0.002405) | 0.003326 / 0.004328 (-0.001003) | 0.076928 / 0.004250 (0.072677) | 0.038205 / 0.037052 (0.001153) | 0.345933 / 0.258489 (0.087444) | 0.383675 / 0.293841 (0.089834) | 0.031908 / 0.128546 (-0.096638) | 0.011724 / 0.075646 (-0.063922) | 0.086974 / 0.419271 (-0.332298) | 0.043084 / 0.043533 (-0.000449) | 0.339663 / 0.255139 (0.084524) | 0.363782 / 0.283200 (0.080582) | 0.090934 / 0.141683 (-0.050749) | 1.459718 / 1.452155 (0.007563) | 1.541104 / 1.492716 (0.048388) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.224005 / 0.018006 (0.205998) | 0.400727 / 0.000490 (0.400238) | 0.000427 / 0.000200 (0.000227) | 0.000061 / 0.000054 (0.000007) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024604 / 0.037411 (-0.012807) | 0.099813 / 0.014526 (0.085287) | 0.104034 / 0.176557 (-0.072523) | 0.156245 / 0.737135 (-0.580890) | 0.108739 / 0.296338 (-0.187600) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.440500 / 0.215209 (0.225291) | 4.379934 / 2.077655 (2.302279) | 2.075826 / 1.504120 (0.571706) | 1.867635 / 1.541195 (0.326441) | 1.919035 / 1.468490 (0.450545) | 0.696613 / 4.584777 (-3.888164) | 3.334993 / 3.745712 (-0.410720) | 1.857139 / 5.269862 (-3.412723) | 1.160598 / 4.565676 (-3.405079) | 0.083120 / 0.424275 (-0.341155) | 0.012475 / 0.007607 (0.004868) | 0.544607 / 0.226044 (0.318563) | 5.436808 / 2.268929 (3.167879) | 2.518562 / 55.444624 (-52.926063) | 2.158434 / 6.876477 (-4.718042) | 2.170691 / 2.142072 (0.028618) | 0.811297 / 4.805227 (-3.993930) | 0.150675 / 6.500664 (-6.349990) | 0.065655 / 0.075469 (-0.009814) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.277627 / 1.841788 (-0.564160) | 13.833501 / 8.074308 (5.759193) | 13.038718 / 10.191392 (2.847325) | 0.148837 / 0.680424 (-0.531587) | 0.016440 / 0.534201 (-0.517761) | 0.379147 / 0.579283 (-0.200136) | 0.379753 / 0.434364 (-0.054611) | 0.460197 / 0.540337 (-0.080141) | 0.544152 / 1.386936 (-0.842784) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#6e2a235cbab1c91dc5eca0cb123f9c9d9f743461 \"CML watermark\")\n" ]
1,678,123,689,000
1,678,195,670,000
1,678,195,257,000
MEMBER
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5614/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5614/timeline
null
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5614", "html_url": "https://github.com/huggingface/datasets/pull/5614", "diff_url": "https://github.com/huggingface/datasets/pull/5614.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5614.patch", "merged_at": "2023-03-07T13:20:57" }
true
https://api.github.com/repos/huggingface/datasets/issues/5613
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5613/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5613/comments
https://api.github.com/repos/huggingface/datasets/issues/5613/events
https://github.com/huggingface/datasets/issues/5613
1,611,875,473
I_kwDODunzps5gE0SR
5,613
Version mismatch with multiprocess and dill on Python 3.10
{ "login": "adampauls", "id": 1243668, "node_id": "MDQ6VXNlcjEyNDM2Njg=", "avatar_url": "https://avatars.githubusercontent.com/u/1243668?v=4", "gravatar_id": "", "url": "https://api.github.com/users/adampauls", "html_url": "https://github.com/adampauls", "followers_url": "https://api.github.com/users/adampauls/followers", "following_url": "https://api.github.com/users/adampauls/following{/other_user}", "gists_url": "https://api.github.com/users/adampauls/gists{/gist_id}", "starred_url": "https://api.github.com/users/adampauls/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/adampauls/subscriptions", "organizations_url": "https://api.github.com/users/adampauls/orgs", "repos_url": "https://api.github.com/users/adampauls/repos", "events_url": "https://api.github.com/users/adampauls/events{/privacy}", "received_events_url": "https://api.github.com/users/adampauls/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Sorry, I just found https://github.com/apache/beam/issues/24458. It seems this issue is being worked on. ", "Reopening, since I think the docs should inform the user of this problem. For example, [this page](https://huggingface.co/docs/datasets/installation) says \r\n> Datasets is tested on Python 3.7+.\r\n\r\nbut it should probably say that Beam Datasets do not work with Python 3.10 (or link to a known issues page). ", "Same problem on Colab using a vanilla setup running :\r\nPython 3.10.11 \r\napache-beam 2.47.0\r\ndatasets 2.12.0", "Same problem, \r\npy 3.10.11\r\napache-beam==2.47.0\r\ndatasets==2.12.0" ]
1,678,122,881,000
1,685,235,835,000
null
NONE
null
### Describe the bug Grabbing the latest version of `datasets` and `apache-beam` with `poetry` using Python 3.10 gives a crash at runtime. The crash is ``` File "/Users/adpauls/sc/git/DSI-transformers/data/NQ/create_NQ_train_vali.py", line 1, in <module> import datasets File "/Users/adpauls/Library/Caches/pypoetry/virtualenvs/yyy-oPbZ7mKM-py3.10/lib/python3.10/site-packages/datasets/__init__.py", line 43, in <module> from .arrow_dataset import Dataset File "/Users/adpauls/Library/Caches/pypoetry/virtualenvs/yyy-oPbZ7mKM-py3.10/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 65, in <module> from .arrow_reader import ArrowReader File "/Users/adpauls/Library/Caches/pypoetry/virtualenvs/yyy-oPbZ7mKM-py3.10/lib/python3.10/site-packages/datasets/arrow_reader.py", line 30, in <module> from .download.download_config import DownloadConfig File "/Users/adpauls/Library/Caches/pypoetry/virtualenvs/yyy-oPbZ7mKM-py3.10/lib/python3.10/site-packages/datasets/download/__init__.py", line 9, in <module> from .download_manager import DownloadManager, DownloadMode File "/Users/adpauls/Library/Caches/pypoetry/virtualenvs/yyy-oPbZ7mKM-py3.10/lib/python3.10/site-packages/datasets/download/download_manager.py", line 35, in <module> from ..utils.py_utils import NestedDataStructure, map_nested, size_str File "/Users/adpauls/Library/Caches/pypoetry/virtualenvs/yyy-oPbZ7mKM-py3.10/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 40, in <module> import multiprocess.pool File "/Users/adpauls/Library/Caches/pypoetry/virtualenvs/yyy-oPbZ7mKM-py3.10/lib/python3.10/site-packages/multiprocess/pool.py", line 609, in <module> class ThreadPool(Pool): File "/Users/adpauls/Library/Caches/pypoetry/virtualenvs/yyy-oPbZ7mKM-py3.10/lib/python3.10/site-packages/multiprocess/pool.py", line 611, in ThreadPool from .dummy import Process File "/Users/adpauls/Library/Caches/pypoetry/virtualenvs/yyy-oPbZ7mKM-py3.10/lib/python3.10/site-packages/multiprocess/dummy/__init__.py", line 87, in <module> class Condition(threading._Condition): AttributeError: module 'threading' has no attribute '_Condition'. Did you mean: 'Condition'? ``` I think this is a bad interaction of versions from `dill`, `multiprocess`, `apache-beam`, and `threading` from the Python (3.10) standard lib. Upgrading `multiprocess` to a version that does not crash like this is not possible because `apache-beam` pins `dill` to and old version: ``` Because multiprocess (0.70.10) depends on dill (>=0.3.2) and apache-beam (2.45.0) depends on dill (>=0.3.1.1,<0.3.2), multiprocess (0.70.10) is incompatible with apache-beam (2.45.0). And because no versions of apache-beam match >2.45.0,<3.0.0, multiprocess (0.70.10) is incompatible with apache-beam (>=2.45.0,<3.0.0). So, because yyy depends on both apache-beam (^2.45.0) and multiprocess (0.70.10), version solving failed. ``` Perhaps it is not right to file a bug here, but I'm not totally sure whose fault it is. And in any case, this is an immediate blocker to using `datasets` out of the box. Possibly related to https://github.com/huggingface/datasets/issues/5232. ### Steps to reproduce the bug Steps to reproduce: 1. Make a poetry project with this configuration ``` [tool.poetry] name = "yyy" version = "0.1.0" description = "" authors = ["Adam Pauls <[email protected]>"] readme = "README.md" packages = [{ include = "xxx" }] [tool.poetry.dependencies] python = ">=3.10,<3.11" datasets = "^2.10.1" apache-beam = "^2.45.0" [build-system] requires = ["poetry-core"] build-backend = "poetry.core.masonry.api" ``` 2. 
`poetry install`. 3. `poetry run python -c "import datasets"`. ### Expected behavior Script runs. ### Environment info Python 3.10. Here are the versions installed by `poetry`: ``` β€’β€’ Installing frozenlist (1.3.3) β€’ Installing idna (3.4) β€’ Installing multidict (6.0.4) β€’ Installing aiosignal (1.3.1) β€’ Installing async-timeout (4.0.2) β€’ Installing attrs (22.2.0) β€’ Installing certifi (2022.12.7) β€’ Installing charset-normalizer (3.1.0) β€’ Installing six (1.16.0) β€’ Installing urllib3 (1.26.14) β€’ Installing yarl (1.8.2) β€’ Installing aiohttp (3.8.4) β€’ Installing dill (0.3.1.1) β€’ Installing docopt (0.6.2) β€’ Installing filelock (3.9.0) β€’ Installing numpy (1.22.4) β€’ Installing pyparsing (3.0.9) β€’ Installing protobuf (3.19.4) β€’ Installing packaging (23.0) β€’ Installing python-dateutil (2.8.2) β€’ Installing pytz (2022.7.1) β€’ Installing pyyaml (6.0) β€’ Installing requests (2.28.2) β€’ Installing tqdm (4.65.0) β€’ Installing typing-extensions (4.5.0) β€’ Installing cloudpickle (2.2.1) β€’ Installing crcmod (1.7) β€’ Installing fastavro (1.7.2) β€’ Installing fasteners (0.18) β€’ Installing fsspec (2023.3.0) β€’ Installing grpcio (1.51.3) β€’ Installing hdfs (2.7.0) β€’ Installing httplib2 (0.20.4) β€’ Installing huggingface-hub (0.12.1) β€’ Installing multiprocess (0.70.9) β€’ Installing objsize (0.6.1) β€’ Installing orjson (3.8.7) β€’ Installing pandas (1.5.3) β€’ Installing proto-plus (1.22.2) β€’ Installing pyarrow (9.0.0) β€’ Installing pydot (1.4.2) β€’ Installing pymongo (3.13.0) β€’ Installing regex (2022.10.31) β€’ Installing responses (0.18.0) β€’ Installing xxhash (3.2.0) β€’ Installing zstandard (0.20.0) β€’ Installing apache-beam (2.45.0) β€’ Installing datasets (2.10.1) ```
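A quick way to confirm the incompatibility described in this report, independent of the poetry setup (a sketch; nothing below is from the issue itself):

```python
# Sketch: the traceback above fails because multiprocess 0.70.9's dummy module
# subclasses threading._Condition, which Python 3.10's threading module does
# not provide. This just confirms the attribute is missing.
import sys
import threading

print(sys.version_info)
print(hasattr(threading, "_Condition"))  # False on Python 3.10
```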
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5613/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5613/timeline
null
reopened
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5612
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5612/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5612/comments
https://api.github.com/repos/huggingface/datasets/issues/5612/events
https://github.com/huggingface/datasets/issues/5612
1,611,262,510
I_kwDODunzps5gCeou
5,612
Arrow map type in parquet files unsupported
{ "login": "TevenLeScao", "id": 26709476, "node_id": "MDQ6VXNlcjI2NzA5NDc2", "avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TevenLeScao", "html_url": "https://github.com/TevenLeScao", "followers_url": "https://api.github.com/users/TevenLeScao/followers", "following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}", "gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}", "starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions", "organizations_url": "https://api.github.com/users/TevenLeScao/orgs", "repos_url": "https://api.github.com/users/TevenLeScao/repos", "events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}", "received_events_url": "https://api.github.com/users/TevenLeScao/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "I'm attaching a minimal reproducible example:\r\n```python\r\nfrom datasets import load_dataset\r\nimport pyarrow as pa\r\nimport pyarrow.parquet as pq\r\n\r\ntable_with_map = pa.Table.from_pydict(\r\n {\"a\": [1, 2], \"b\": [[(\"a\", 2)], [(\"b\", 4)]]},\r\n schema=pa.schema({\"a\": pa.int32(), \"b\": pa.map_(pa.string(), pa.int32())})\r\n)\r\npq.write_table(table_with_map, \"parquet_with_map.parquet\")\r\ndset = load_dataset(\"parquet\", data_files=\"parquet_with_map.parquet\", split=\"train\") # error unless streaming=True\r\n``` \r\n\r\nFor a dataset generated with the packaged loaders (CSV, JSON, Parquet), `streaming=True` sets the dataset's features to `None` (unless explicitly provided in `load_dataset`), hence no error will be thrown as long as the features stay \"unresolved\" (resolving the features with `_resolve_features` will lead to an error)." ]
1,678,104,204,000
1,678,814,425,000
null
CONTRIBUTOR
null
### Describe the bug When I try to load parquet files that were processed with Spark, I get the following issue: `ValueError: Arrow type map<string, string ('warc_headers')> does not have a datasets dtype equivalent.` Strangely, loading the dataset with `streaming=True` solves the issue. ### Steps to reproduce the bug The dataset is private, but this can be reproduced with any dataset that has Arrow maps. ### Expected behavior The dataset should load regardless of whether streaming is True or not. ### Environment info - `datasets` version: 2.10.1 - Platform: Linux-5.15.0-1029-gcp-x86_64-with-glibc2.31 - Python version: 3.10.7 - PyArrow version: 8.0.0 - Pandas version: 1.4.2
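For reference, a minimal sketch of the streaming workaround mentioned above (the file name is reused from the reproduction in the comments, not part of the original report):

```python
# Sketch: with streaming=True the features stay unresolved, so the
# unsupported Arrow map type is never converted to a `datasets` dtype.
from datasets import load_dataset

dset = load_dataset(
    "parquet",
    data_files="parquet_with_map.parquet",  # file from the repro above
    split="train",
    streaming=True,
)
print(next(iter(dset)))
```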
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5612/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5612/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5611
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5611/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5611/comments
https://api.github.com/repos/huggingface/datasets/issues/5611/events
https://github.com/huggingface/datasets/pull/5611
1,611,197,906
PR_kwDODunzps5LW2Lx
5,611
add Dataset.to_list
{ "login": "kyoto7250", "id": 50972773, "node_id": "MDQ6VXNlcjUwOTcyNzcz", "avatar_url": "https://avatars.githubusercontent.com/u/50972773?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kyoto7250", "html_url": "https://github.com/kyoto7250", "followers_url": "https://api.github.com/users/kyoto7250/followers", "following_url": "https://api.github.com/users/kyoto7250/following{/other_user}", "gists_url": "https://api.github.com/users/kyoto7250/gists{/gist_id}", "starred_url": "https://api.github.com/users/kyoto7250/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kyoto7250/subscriptions", "organizations_url": "https://api.github.com/users/kyoto7250/orgs", "repos_url": "https://api.github.com/users/kyoto7250/repos", "events_url": "https://api.github.com/users/kyoto7250/events{/privacy}", "received_events_url": "https://api.github.com/users/kyoto7250/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Hi, thanks for working on this! `Table.to_pylist` requires PyArrow 7.0+, and our minimal version requirement is 6.0, so we need to bump the version requirement to avoid CI failure. I'll do this in a separate PR.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006857 / 0.011353 (-0.004496) | 0.004711 / 0.011008 (-0.006297) | 0.098332 / 0.038508 (0.059824) | 0.028547 / 0.023109 (0.005438) | 0.307647 / 0.275898 (0.031749) | 0.334891 / 0.323480 (0.011411) | 0.005252 / 0.007986 (-0.002734) | 0.003495 / 0.004328 (-0.000833) | 0.075529 / 0.004250 (0.071279) | 0.042167 / 0.037052 (0.005114) | 0.308509 / 0.258489 (0.050020) | 0.348294 / 0.293841 (0.054453) | 0.032042 / 0.128546 (-0.096504) | 0.011684 / 0.075646 (-0.063962) | 0.321740 / 0.419271 (-0.097531) | 0.057725 / 0.043533 (0.014193) | 0.309431 / 0.255139 (0.054292) | 0.326818 / 0.283200 (0.043618) | 0.093261 / 0.141683 (-0.048422) | 1.475344 / 1.452155 (0.023190) | 1.563952 / 1.492716 (0.071236) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.205056 / 0.018006 (0.187050) | 0.421656 / 0.000490 (0.421166) | 0.004167 / 0.000200 (0.003967) | 0.000075 / 0.000054 (0.000020) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023935 / 0.037411 (-0.013476) | 0.097220 / 0.014526 (0.082695) | 0.104942 / 0.176557 (-0.071615) | 0.170339 / 0.737135 (-0.566796) | 0.107556 / 0.296338 (-0.188782) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled 
read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.424509 / 0.215209 (0.209300) | 4.223637 / 2.077655 (2.145982) | 2.090700 / 1.504120 (0.586580) | 1.902537 / 1.541195 (0.361343) | 1.981192 / 1.468490 (0.512701) | 0.695272 / 4.584777 (-3.889505) | 3.570169 / 3.745712 (-0.175544) | 1.885007 / 5.269862 (-3.384854) | 1.162828 / 4.565676 (-3.402848) | 0.084956 / 0.424275 (-0.339319) | 0.012818 / 0.007607 (0.005210) | 0.534395 / 0.226044 (0.308351) | 5.354318 / 2.268929 (3.085389) | 2.436875 / 55.444624 (-53.007749) | 2.111365 / 6.876477 (-4.765112) | 2.232874 / 2.142072 (0.090802) | 0.804703 / 4.805227 (-4.000524) | 0.152406 / 6.500664 (-6.348258) | 0.066926 / 0.075469 (-0.008543) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.198621 / 1.841788 (-0.643166) | 13.907491 / 8.074308 (5.833183) | 14.356286 / 10.191392 (4.164894) | 0.140714 / 0.680424 (-0.539710) | 0.016440 / 0.534201 (-0.517761) | 0.380868 / 0.579283 (-0.198415) | 0.396004 / 0.434364 (-0.038360) | 0.448275 / 0.540337 (-0.092062) | 0.537818 / 1.386936 (-0.849118) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006789 / 0.011353 (-0.004564) | 0.004652 / 0.011008 (-0.006356) | 0.076449 / 0.038508 (0.037941) | 0.028389 / 0.023109 (0.005280) | 0.378644 / 0.275898 (0.102746) | 0.423870 / 0.323480 (0.100391) | 0.005824 / 0.007986 (-0.002162) | 0.003398 / 0.004328 (-0.000931) | 0.075575 / 0.004250 (0.071324) | 0.039656 / 0.037052 (0.002604) | 0.370072 / 0.258489 (0.111583) | 0.441812 / 0.293841 (0.147971) | 0.031817 / 0.128546 (-0.096729) | 0.011701 / 0.075646 (-0.063946) | 0.085759 / 0.419271 (-0.333513) | 0.042328 / 0.043533 (-0.001205) | 0.364103 / 0.255139 (0.108964) | 0.413910 / 0.283200 (0.130711) | 0.090871 / 0.141683 (-0.050812) | 1.505749 / 1.452155 (0.053594) | 1.608555 / 1.492716 (0.115839) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row 
|\n|--------|---|---|---|---|\n| new / old (diff) | 0.212533 / 0.018006 (0.194527) | 0.404519 / 0.000490 (0.404030) | 0.000373 / 0.000200 (0.000174) | 0.000059 / 0.000054 (0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024849 / 0.037411 (-0.012562) | 0.100769 / 0.014526 (0.086243) | 0.110450 / 0.176557 (-0.066107) | 0.161715 / 0.737135 (-0.575420) | 0.113599 / 0.296338 (-0.182739) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.436780 / 0.215209 (0.221571) | 4.387103 / 2.077655 (2.309448) | 2.081942 / 1.504120 (0.577822) | 1.873661 / 1.541195 (0.332466) | 1.947718 / 1.468490 (0.479228) | 0.696434 / 4.584777 (-3.888343) | 3.405300 / 3.745712 (-0.340412) | 1.897388 / 5.269862 (-3.372474) | 1.169969 / 4.565676 (-3.395707) | 0.083085 / 0.424275 (-0.341190) | 0.012480 / 0.007607 (0.004873) | 0.535635 / 0.226044 (0.309591) | 5.364462 / 2.268929 (3.095533) | 2.531168 / 55.444624 (-52.913457) | 2.184324 / 6.876477 (-4.692153) | 2.228613 / 2.142072 (0.086541) | 0.807127 / 4.805227 (-3.998100) | 0.151971 / 6.500664 (-6.348693) | 0.068430 / 0.075469 (-0.007039) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.306401 / 1.841788 (-0.535387) | 14.479552 / 8.074308 (6.405244) | 14.428398 / 10.191392 (4.237006) | 0.159505 / 0.680424 (-0.520919) | 0.016856 / 0.534201 (-0.517344) | 0.375197 / 0.579283 (-0.204086) | 0.384328 / 0.434364 (-0.050036) | 0.440688 / 0.540337 (-0.099650) | 0.524998 / 1.386936 (-0.861938) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#50b887b840cf3cab86b0394b41050b579c4b79ba \"CML watermark\")\n" ]
1,678,101,717,000
1,679,924,059,000
1,679,923,598,000
CONTRIBUTOR
null
close https://github.com/huggingface/datasets/issues/5606 This PR is for adding the `Dataset.to_list` method. Thank you in advance.
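A short usage sketch of the method this PR adds (illustrative, not taken from the PR; the expected output mirrors `Dataset.from_list`):

```python
from datasets import Dataset

ds = Dataset.from_list([{"text": "a"}, {"text": "b"}])
# to_list is the inverse of from_list: one dict per row
assert ds.to_list() == [{"text": "a"}, {"text": "b"}]
```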
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5611/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5611/timeline
null
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5611", "html_url": "https://github.com/huggingface/datasets/pull/5611", "diff_url": "https://github.com/huggingface/datasets/pull/5611.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5611.patch", "merged_at": "2023-03-27T13:26:38" }
true
https://api.github.com/repos/huggingface/datasets/issues/5610
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5610/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5610/comments
https://api.github.com/repos/huggingface/datasets/issues/5610/events
https://github.com/huggingface/datasets/issues/5610
1,610,698,006
I_kwDODunzps5gAU0W
5,610
use datasets streaming mode in trainer ddp mode causes memory leak
{ "login": "gromzhu", "id": 15223544, "node_id": "MDQ6VXNlcjE1MjIzNTQ0", "avatar_url": "https://avatars.githubusercontent.com/u/15223544?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gromzhu", "html_url": "https://github.com/gromzhu", "followers_url": "https://api.github.com/users/gromzhu/followers", "following_url": "https://api.github.com/users/gromzhu/following{/other_user}", "gists_url": "https://api.github.com/users/gromzhu/gists{/gist_id}", "starred_url": "https://api.github.com/users/gromzhu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gromzhu/subscriptions", "organizations_url": "https://api.github.com/users/gromzhu/orgs", "repos_url": "https://api.github.com/users/gromzhu/repos", "events_url": "https://api.github.com/users/gromzhu/events{/privacy}", "received_events_url": "https://api.github.com/users/gromzhu/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Same problem, \r\ntransformers 4.28.1\r\ndatasets 2.12.0\r\n\r\nleak around 100Mb per 10 seconds when use dataloader_num_werker > 0 in training argumennts for transformer train, possile bug in transformers repo, but still not found solution :(\r\n", "found an article described a problem, may be helpful for somebody:\r\nhttps://ppwwyyxx.com/blog/2022/Demystify-RAM-Usage-in-Multiprocess-DataLoader/\r\nI confirm, it`s not memory leak, after some time memory growing has stopped" ]
1,678,080,409,000
1,683,472,532,000
null
NONE
null
### Describe the bug use datasets streaming mode in trainer ddp mode cause memory leak ### Steps to reproduce the bug import os import time import datetime import sys import numpy as np import random import torch from torch.utils.data import Dataset, DataLoader, random_split, RandomSampler, SequentialSampler,DistributedSampler,BatchSampler torch.manual_seed(42) from transformers import GPT2LMHeadModel, GPT2Tokenizer, GPT2Config, GPT2Model,DataCollatorForLanguageModeling,AutoModelForCausalLM from transformers import AdamW, get_linear_schedule_with_warmup hf_model_path ='./Wenzhong-GPT2-110M' tokenizer = GPT2Tokenizer.from_pretrained(hf_model_path) tokenizer.add_special_tokens({'pad_token': '<|pad|>'}) from datasets import load_dataset gpus=8 max_len = 576 batch_size_node = 17 save_step = 5000 gradient_accumulation = 2 dataloader_num = 4 max_step = 351000*1000//batch_size_node//gradient_accumulation//gpus #max_step = -1 print("total_step:%d"%(max_step)) import datasets datasets.version dataset = load_dataset("text", data_files="./gpt_data_v1/*",split='train',cache_dir='./dataset_cache',streaming=True) print('load over') shuffled_dataset = dataset.shuffle(seed=42) print('shuffle over') def dataset_tokener(example,max_lenth=max_len): example['text'] = list(map(lambda x : x.strip()+'<|endoftext|>',example['text'] )) return tokenizer(example['text'], truncation=True, max_length=max_lenth, padding="longest") new_new_dataset = shuffled_dataset.map(dataset_tokener, batched=True, remove_columns=["text"]) print('map over') configuration = GPT2Config.from_pretrained(hf_model_path, output_hidden_states=False) model = AutoModelForCausalLM.from_pretrained(hf_model_path) model.resize_token_embeddings(len(tokenizer)) seed_val = 42 random.seed(seed_val) np.random.seed(seed_val) torch.manual_seed(seed_val) torch.cuda.manual_seed_all(seed_val) from transformers import Trainer,TrainingArguments import os print("strat train") training_args = TrainingArguments(output_dir="./test_trainer", num_train_epochs=1.0, report_to="none", do_train=True, dataloader_num_workers=dataloader_num, local_rank=int(os.environ.get('LOCAL_RANK', -1)), overwrite_output_dir=True, logging_strategy='steps', logging_first_step=True, logging_dir="./logs", log_on_each_node=False, per_device_train_batch_size=batch_size_node, warmup_ratio=0.03, save_steps=save_step, save_total_limit=5, gradient_accumulation_steps=gradient_accumulation, max_steps=max_step, disable_tqdm=False, data_seed=42 ) trainer = Trainer( model=model, args=training_args, train_dataset=new_new_dataset, eval_dataset=None, tokenizer=tokenizer, data_collator=DataCollatorForLanguageModeling(tokenizer,mlm=False), #compute_metrics=compute_metrics if training_args.do_eval and not is_torch_tpu_available() else None, #preprocess_logits_for_metrics=preprocess_logits_for_metrics #if training_args.do_eval and not is_torch_tpu_available() #else None, ) trainer.train(resume_from_checkpoint=True) ### Expected behavior use the train code uppper my dataset ./gpt_data_v1 have 1000 files, each file size is 120mb start cmd is : python -m torch.distributed.launch --nproc_per_node=8 my_train.py here is result: ![image](https://user-images.githubusercontent.com/15223544/223026042-1a81489f-897a-43e4-8339-65a202fd5dc7.png) here is memory usage monitor in 12 hours ![image](https://user-images.githubusercontent.com/15223544/223027076-14e32e8b-9608-4282-9a80-f15d0277026d.png) every dataloader work allocate over 24gb cpu memory according to memory usage monitor in 12 hours,sometime small memory 
releases, but total memory usage keeps increasing. I think datasets streaming mode should not use this much memory, so there may be a memory leak somewhere. ### Environment info pytorch 1.11.0 py 3.8 cuda 11.3 transformers 4.26.1 datasets 2.9.0
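A sketch of the mitigation implied by the comments on this issue (per-worker RSS growth from copy-on-read goes away when DataLoader workers are disabled; values other than `dataloader_num_workers` are copied from the script above):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./test_trainer",
    dataloader_num_workers=0,  # workers > 0 is what showed the apparent leak
    per_device_train_batch_size=17,
    gradient_accumulation_steps=2,
    save_steps=5000,
)
```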
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5610/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5610/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5609
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5609/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5609/comments
https://api.github.com/repos/huggingface/datasets/issues/5609/events
https://github.com/huggingface/datasets/issues/5609
1,610,062,862
I_kwDODunzps5f95wO
5,609
`load_from_disk` vs `load_dataset` performance.
{ "login": "davidgilbertson", "id": 4443482, "node_id": "MDQ6VXNlcjQ0NDM0ODI=", "avatar_url": "https://avatars.githubusercontent.com/u/4443482?v=4", "gravatar_id": "", "url": "https://api.github.com/users/davidgilbertson", "html_url": "https://github.com/davidgilbertson", "followers_url": "https://api.github.com/users/davidgilbertson/followers", "following_url": "https://api.github.com/users/davidgilbertson/following{/other_user}", "gists_url": "https://api.github.com/users/davidgilbertson/gists{/gist_id}", "starred_url": "https://api.github.com/users/davidgilbertson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/davidgilbertson/subscriptions", "organizations_url": "https://api.github.com/users/davidgilbertson/orgs", "repos_url": "https://api.github.com/users/davidgilbertson/repos", "events_url": "https://api.github.com/users/davidgilbertson/events{/privacy}", "received_events_url": "https://api.github.com/users/davidgilbertson/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Hi! We've recently made some improvements to `save_to_disk`/`list_to_disk` (100x faster in some scenarios), so it would help if you could install `datasets` directly from `main` (`pip install git+https://github.com/huggingface/datasets.git`) and re-run the \"benchmark\".", "Great to hear! I'll give it a try when I've got a moment." ]
1,677,994,035,000
1,678,822,313,000
null
NONE
null
### Describe the bug I have downloaded `openwebtext` (~12GB) and filtered out a small amount of junk (it's still huge). Now, I would like to use this filtered version for future work. It seems I have two choices: 1. Use `load_dataset` each time, relying on the cache mechanism, and re-run my filtering. 2. `save_to_disk` and then use `load_from_disk` to load the filtered version. The performance of these two approaches is wildly different: * Using `load_dataset` takes about 20 seconds to load the dataset, and a few seconds to re-filter (thanks to the brilliant filter/map caching) * Using `load_from_disk` takes 14 minutes! And the second time I tried, the session just crashed (on a machine with 32GB of RAM) I don't know if you'd call this a bug, but it seems like there shouldn't need to be two methods to load from disk, or that they should not take such wildly different amounts of time, or that one should not crash. Or maybe that the docs could offer some guidance about when to pick which method and why two methods exist, or just how do most people do it? Something I couldn't work out from reading the docs was this: can I modify a dataset from the hub, save it (locally) and use `load_dataset` to load it? This [post seemed to suggest that the answer is no](https://discuss.huggingface.co/t/save-and-load-datasets/9260). ### Steps to reproduce the bug See above ### Expected behavior Load times should be about the same. ### Environment info - `datasets` version: 2.9.0 - Platform: Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.31 - Python version: 3.10.8 - PyArrow version: 11.0.0 - Pandas version: 1.5.3
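For concreteness, a sketch of the two workflows being compared (the filter shown is a stand-in for the real junk-removal step, and the paths are hypothetical):

```python
from datasets import load_dataset, load_from_disk

# Option 1: load from the hub cache and re-run the (cached) filter each time
ds = load_dataset("openwebtext", split="train")
ds = ds.filter(lambda ex: len(ex["text"]) > 0)  # placeholder for the real filter

# Option 2: materialize the filtered dataset once, then reload it later
ds.save_to_disk("openwebtext-filtered")
ds2 = load_from_disk("openwebtext-filtered")
```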
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5609/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5609/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5608
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5608/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5608/comments
https://api.github.com/repos/huggingface/datasets/issues/5608/events
https://github.com/huggingface/datasets/issues/5608
1,609,996,563
I_kwDODunzps5f9pkT
5,608
audiofolder only creates dataset of 13 rows (files) when the data folder it's reading from has 20,000 mp3 files.
{ "login": "jcho19", "id": 107211437, "node_id": "U_kgDOBmPqrQ", "avatar_url": "https://avatars.githubusercontent.com/u/107211437?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jcho19", "html_url": "https://github.com/jcho19", "followers_url": "https://api.github.com/users/jcho19/followers", "following_url": "https://api.github.com/users/jcho19/following{/other_user}", "gists_url": "https://api.github.com/users/jcho19/gists{/gist_id}", "starred_url": "https://api.github.com/users/jcho19/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jcho19/subscriptions", "organizations_url": "https://api.github.com/users/jcho19/orgs", "repos_url": "https://api.github.com/users/jcho19/repos", "events_url": "https://api.github.com/users/jcho19/events{/privacy}", "received_events_url": "https://api.github.com/users/jcho19/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi!\r\n\r\n> naming convention of mp3 files\r\n\r\nYes, this could be the problem. MP3 files should end with `.mp3`/`.MP3` to be recognized as audio files.\r\n\r\nIf the file names are not the culprit, can you paste the audio folder's directory structure to help us reproduce the error (e.g., by running the `tree \"x\"` command)?", "Hi! I'm sorry, I don't want to reveal my entire dataset, but here's a snippet (all of the mp3 files below are some of the ones not being recognized by audiofolder. Also, for another dataset, audiofolder loaded zero mp3 files because \"train\" was in the name of one of the mp3 files. \r\nmy_dataset\r\nβ”œβ”€β”€ data\r\nβ”‚Β Β  β”œβ”€β”€ VHA_Innovation_Stories_-_Day_2-123.mp3\r\nβ”‚Β Β  β”œβ”€β”€ VHA_Innovation_Stories_-_Day_2-124.mp3\r\nβ”‚Β Β  β”œβ”€β”€ ASSOCIATION_OF_GENERAL_PRACTITIONERS_OF_JAMAICA_NEPHROLOGY_CONFERENCE_-_JULY_3,_2022-93.mp3\r\nβ”‚Β Β  β”œβ”€β”€ ASSOCIATION_OF_GENERAL_PRACTITIONERS_OF_JAMAICA_NEPHROLOGY_CONFERENCE_-_JULY_3,_2022-94.mp3\r\nβ”‚Β Β  β”œβ”€β”€ ASSOCIATION_OF_GENERAL_PRACTITIONERS_OF_JAMAICA_NEPHROLOGY_CONFERENCE_-_JULY_3,_2022-95.mp3\r\nβ”‚Β Β  β”œβ”€β”€ Your_Impact\\357\\274\\232_Neurosurgery_equipment-5.mp3\r\nβ”‚Β Β  └── Your_Impact\\357\\274\\232_Neurosurgery_equipment-6.mp3\r\n└── metadata.csv\r\n\r\nHere's a few of the 13 files recognized by the dataset:\r\nBritish_Heart_Foundation_-_Your_guide_to_a_Coronary_Angiogram,_a_test_for_heart_disease-1.mp3\r\nBritish_Heart_Foundation_-_Your_guide_to_a_Coronary_Angiogram,_a_test_for_heart_disease-2.mp3\r\nBritish_Heart_Foundation_-_Your_guide_to_a_Coronary_Angiogram,_a_test_for_heart_disease-3.mp3\r\nIVP_β§Έ_IVU_test_Procedure_for_Kidneys_intravenous_pyelogram_-_medical_radiology_X-ray_ivp-1.mp3\r\nIVP_β§Έ_IVU_test_Procedure_for_Kidneys_intravenous_pyelogram_-_medical_radiology_X-ray_ivp-2.mp3" ]
1,677,975,285,000
1,678,579,377,000
1,678,579,377,000
NONE
null
### Describe the bug x = load_dataset("audiofolder", data_dir="x") When running this, x is a dataset of 13 rows (files) when it should be 20,000 rows (files), as the data_dir "x" has 20,000 mp3 files. Does anyone know what could possibly cause this (naming convention of mp3 files, etc.)? ### Steps to reproduce the bug x = load_dataset("audiofolder", data_dir="x") ### Expected behavior x = load_dataset("audiofolder", data_dir="x") should create a dataset of 20,000 rows (files). ### Environment info - `datasets` version: 2.9.0 - Platform: Linux-3.10.0-1160.80.1.el7.x86_64-x86_64-with-glibc2.17 - Python version: 3.9.16 - PyArrow version: 11.0.0 - Pandas version: 1.5.3
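A small sketch (directory layout assumed from the comments) to compare what is on disk with what `audiofolder` actually picks up:

```python
from pathlib import Path
from datasets import load_dataset

on_disk = list(Path("x").rglob("*.mp3"))
print(len(on_disk))  # expected: 20,000

ds = load_dataset("audiofolder", data_dir="x")
print(ds["train"].num_rows)  # observed here: 13
```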
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5608/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5608/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5607
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5607/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5607/comments
https://api.github.com/repos/huggingface/datasets/issues/5607/events
https://github.com/huggingface/datasets/pull/5607
1,609,166,035
PR_kwDODunzps5LQPbG
5,607
Fix outdated `verification_mode` values
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006142 / 0.011353 (-0.005211) | 0.004506 / 0.011008 (-0.006502) | 0.100224 / 0.038508 (0.061715) | 0.026988 / 0.023109 (0.003879) | 0.301625 / 0.275898 (0.025727) | 0.346337 / 0.323480 (0.022857) | 0.004642 / 0.007986 (-0.003343) | 0.003481 / 0.004328 (-0.000847) | 0.075847 / 0.004250 (0.071597) | 0.036959 / 0.037052 (-0.000094) | 0.302697 / 0.258489 (0.044208) | 0.351917 / 0.293841 (0.058076) | 0.030719 / 0.128546 (-0.097828) | 0.011591 / 0.075646 (-0.064056) | 0.319709 / 0.419271 (-0.099563) | 0.042000 / 0.043533 (-0.001532) | 0.306854 / 0.255139 (0.051715) | 0.326903 / 0.283200 (0.043703) | 0.082711 / 0.141683 (-0.058972) | 1.486616 / 1.452155 (0.034461) | 1.603229 / 1.492716 (0.110513) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.198990 / 0.018006 (0.180983) | 0.427733 / 0.000490 (0.427243) | 0.003612 / 0.000200 (0.003412) | 0.000071 / 0.000054 (0.000016) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022932 / 0.037411 (-0.014480) | 0.096969 / 0.014526 (0.082443) | 0.105749 / 0.176557 (-0.070807) | 0.166101 / 0.737135 (-0.571034) | 0.108646 / 0.296338 (-0.187692) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.428174 / 0.215209 (0.212965) | 4.271452 / 2.077655 (2.193797) | 
1.907588 / 1.504120 (0.403468) | 1.680870 / 1.541195 (0.139675) | 1.761336 / 1.468490 (0.292846) | 0.700380 / 4.584777 (-3.884396) | 3.415168 / 3.745712 (-0.330544) | 1.886122 / 5.269862 (-3.383740) | 1.276814 / 4.565676 (-3.288863) | 0.083429 / 0.424275 (-0.340846) | 0.012988 / 0.007607 (0.005381) | 0.518821 / 0.226044 (0.292776) | 5.188284 / 2.268929 (2.919356) | 2.433084 / 55.444624 (-53.011540) | 1.988034 / 6.876477 (-4.888443) | 2.100275 / 2.142072 (-0.041797) | 0.808252 / 4.805227 (-3.996976) | 0.158102 / 6.500664 (-6.342562) | 0.067686 / 0.075469 (-0.007783) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.204171 / 1.841788 (-0.637616) | 13.548756 / 8.074308 (5.474448) | 14.339805 / 10.191392 (4.148413) | 0.142853 / 0.680424 (-0.537571) | 0.016529 / 0.534201 (-0.517672) | 0.383800 / 0.579283 (-0.195483) | 0.380362 / 0.434364 (-0.054002) | 0.437716 / 0.540337 (-0.102621) | 0.524306 / 1.386936 (-0.862630) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006730 / 0.011353 (-0.004623) | 0.004652 / 0.011008 (-0.006356) | 0.077476 / 0.038508 (0.038968) | 0.027584 / 0.023109 (0.004475) | 0.340907 / 0.275898 (0.065009) | 0.377950 / 0.323480 (0.054470) | 0.005946 / 0.007986 (-0.002040) | 0.003548 / 0.004328 (-0.000780) | 0.076270 / 0.004250 (0.072019) | 0.037483 / 0.037052 (0.000431) | 0.346390 / 0.258489 (0.087901) | 0.384739 / 0.293841 (0.090898) | 0.031744 / 0.128546 (-0.096802) | 0.011598 / 0.075646 (-0.064049) | 0.085651 / 0.419271 (-0.333620) | 0.047308 / 0.043533 (0.003775) | 0.344704 / 0.255139 (0.089565) | 0.363410 / 0.283200 (0.080211) | 0.095009 / 0.141683 (-0.046674) | 1.478307 / 1.452155 (0.026152) | 1.576808 / 1.492716 (0.084092) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.197545 / 0.018006 (0.179539) | 0.431984 / 0.000490 (0.431494) | 0.001529 / 0.000200 (0.001329) | 0.000079 / 0.000054 (0.000025) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025452 / 0.037411 (-0.011959) | 0.100176 / 0.014526 (0.085651) | 0.108222 / 0.176557 (-0.068335) | 0.160556 / 0.737135 (-0.576580) | 0.112748 / 0.296338 (-0.183591) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.436326 / 0.215209 (0.221117) | 4.378443 / 2.077655 (2.300788) | 2.056001 / 1.504120 (0.551881) | 1.853406 / 1.541195 (0.312211) | 1.931645 / 1.468490 (0.463155) | 0.698340 / 4.584777 (-3.886437) | 3.368961 / 3.745712 (-0.376751) | 2.583622 / 5.269862 (-2.686239) | 1.501274 / 4.565676 (-3.064402) | 0.083034 / 0.424275 (-0.341241) | 0.012725 / 0.007607 (0.005117) | 0.539991 / 0.226044 (0.313947) | 5.418413 / 2.268929 (3.149485) | 2.517205 / 55.444624 (-52.927420) | 2.179332 / 6.876477 (-4.697144) | 2.215376 / 2.142072 (0.073304) | 0.806133 / 4.805227 (-3.999094) | 0.151499 / 6.500664 (-6.349165) | 0.067270 / 0.075469 (-0.008199) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.308324 / 1.841788 (-0.533464) | 14.357361 / 8.074308 (6.283053) | 14.684768 / 10.191392 (4.493376) | 0.139575 / 0.680424 (-0.540849) | 0.016409 / 0.534201 (-0.517792) | 0.374087 / 0.579283 (-0.205196) | 0.390628 / 0.434364 (-0.043735) | 0.443102 / 0.540337 (-0.097235) | 0.536089 / 1.386936 (-0.850847) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#778d4e1c13ece980e706f8c7cb06e8473fd61315 \"CML watermark\")\n" ]
1,677,873,029,000
1,678,383,253,000
1,678,382,827,000
CONTRIBUTOR
null
~I think it makes sense not to save `dataset_info.json` file to a dataset cache directory when loading dataset with `verification_mode="no_checks"` because otherwise when next time the dataset is loaded **without** `verification_mode="no_checks"`, it will be loaded successfully, despite some values in info might not correspond to the ones in the repo which was the reason for using `verification_mode="no_checks"` first.~ Updated values of `verification_mode` to the current ones in some places ("none" -> "no_checks", "all" -> "all_checks")
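For illustration, the updated values in use (a sketch; the dataset name is a placeholder):

```python
from datasets import load_dataset

# "no_checks" replaces the old "none", "all_checks" replaces the old "all"
ds = load_dataset("some_user/some_dataset", verification_mode="no_checks")
ds = load_dataset("some_user/some_dataset", verification_mode="all_checks")
```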
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5607/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5607/timeline
null
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5607", "html_url": "https://github.com/huggingface/datasets/pull/5607", "diff_url": "https://github.com/huggingface/datasets/pull/5607.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5607.patch", "merged_at": "2023-03-09T17:27:07" }
true
https://api.github.com/repos/huggingface/datasets/issues/5606
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5606/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5606/comments
https://api.github.com/repos/huggingface/datasets/issues/5606/events
https://github.com/huggingface/datasets/issues/5606
1,608,911,632
I_kwDODunzps5f5gsQ
5,606
Add `Dataset.to_list` to the API
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 1935892877, "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue", "name": "good first issue", "color": "7057ff", "default": true, "description": "Good for newcomers" } ]
closed
false
{ "login": "kyoto7250", "id": 50972773, "node_id": "MDQ6VXNlcjUwOTcyNzcz", "avatar_url": "https://avatars.githubusercontent.com/u/50972773?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kyoto7250", "html_url": "https://github.com/kyoto7250", "followers_url": "https://api.github.com/users/kyoto7250/followers", "following_url": "https://api.github.com/users/kyoto7250/following{/other_user}", "gists_url": "https://api.github.com/users/kyoto7250/gists{/gist_id}", "starred_url": "https://api.github.com/users/kyoto7250/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kyoto7250/subscriptions", "organizations_url": "https://api.github.com/users/kyoto7250/orgs", "repos_url": "https://api.github.com/users/kyoto7250/repos", "events_url": "https://api.github.com/users/kyoto7250/events{/privacy}", "received_events_url": "https://api.github.com/users/kyoto7250/received_events", "type": "User", "site_admin": false }
[ { "login": "kyoto7250", "id": 50972773, "node_id": "MDQ6VXNlcjUwOTcyNzcz", "avatar_url": "https://avatars.githubusercontent.com/u/50972773?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kyoto7250", "html_url": "https://github.com/kyoto7250", "followers_url": "https://api.github.com/users/kyoto7250/followers", "following_url": "https://api.github.com/users/kyoto7250/following{/other_user}", "gists_url": "https://api.github.com/users/kyoto7250/gists{/gist_id}", "starred_url": "https://api.github.com/users/kyoto7250/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kyoto7250/subscriptions", "organizations_url": "https://api.github.com/users/kyoto7250/orgs", "repos_url": "https://api.github.com/users/kyoto7250/repos", "events_url": "https://api.github.com/users/kyoto7250/events{/privacy}", "received_events_url": "https://api.github.com/users/kyoto7250/received_events", "type": "User", "site_admin": false } ]
[ "Hello, I have an interest in this issue.\r\nIs the `Dataset.to_dict` you are describing correct in the code here?\r\n\r\nhttps://github.com/huggingface/datasets/blob/35b789e8f6826b6b5a6b48fcc2416c890a1f326a/src/datasets/arrow_dataset.py#L4633-L4667", "Yes, this is where `Dataset.to_dict` is defined.", "#self-assign" ]
1,677,860,230,000
1,679,923,600,000
1,679,923,600,000
CONTRIBUTOR
null
Since there is `Dataset.from_list` in the API, we should also add `Dataset.to_list` to be consistent. Regarding the implementation, we can re-use `Dataset.to_dict`'s code and replace the `to_pydict` calls with `to_pylist`.
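To make the proposed row-wise output concrete, here is a minimal sketch shown on a plain pyarrow table rather than a `Dataset` (pyarrow's `Table.to_pydict`/`Table.to_pylist` are real APIs; the exact wiring into `Dataset.to_list` is left to the linked implementation):
```python
# Minimal sketch of the proposed behaviour, illustrated on a plain pyarrow
# Table; the actual Dataset.to_list integration is only sketched here.
import pyarrow as pa

table = pa.table({"text": ["a", "b"], "label": [0, 1]})

# to_dict-style output: one dict of columns
print(table.to_pydict())  # {'text': ['a', 'b'], 'label': [0, 1]}

# to_list-style output: one dict per row, as proposed in this issue
print(table.to_pylist())  # [{'text': 'a', 'label': 0}, {'text': 'b', 'label': 1}]
```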
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5606/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5606/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5605
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5605/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5605/comments
https://api.github.com/repos/huggingface/datasets/issues/5605/events
https://github.com/huggingface/datasets/pull/5605
1,608,865,460
PR_kwDODunzps5LPPf5
5,605
Update README logo
{ "login": "gary149", "id": 3841370, "node_id": "MDQ6VXNlcjM4NDEzNzA=", "avatar_url": "https://avatars.githubusercontent.com/u/3841370?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gary149", "html_url": "https://github.com/gary149", "followers_url": "https://api.github.com/users/gary149/followers", "following_url": "https://api.github.com/users/gary149/following{/other_user}", "gists_url": "https://api.github.com/users/gary149/gists{/gist_id}", "starred_url": "https://api.github.com/users/gary149/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gary149/subscriptions", "organizations_url": "https://api.github.com/users/gary149/orgs", "repos_url": "https://api.github.com/users/gary149/repos", "events_url": "https://api.github.com/users/gary149/events{/privacy}", "received_events_url": "https://api.github.com/users/gary149/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Are you sure it's safe to remove? https://github.com/huggingface/datasets/pull/3866", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009520 / 0.011353 (-0.001833) | 0.005319 / 0.011008 (-0.005690) | 0.099372 / 0.038508 (0.060863) | 0.036173 / 0.023109 (0.013064) | 0.295752 / 0.275898 (0.019853) | 0.362882 / 0.323480 (0.039402) | 0.008442 / 0.007986 (0.000456) | 0.004225 / 0.004328 (-0.000103) | 0.076645 / 0.004250 (0.072394) | 0.044198 / 0.037052 (0.007146) | 0.311948 / 0.258489 (0.053459) | 0.342963 / 0.293841 (0.049122) | 0.038613 / 0.128546 (-0.089933) | 0.012127 / 0.075646 (-0.063519) | 0.334427 / 0.419271 (-0.084844) | 0.048309 / 0.043533 (0.004776) | 0.297046 / 0.255139 (0.041907) | 0.314562 / 0.283200 (0.031363) | 0.105797 / 0.141683 (-0.035886) | 1.460967 / 1.452155 (0.008812) | 1.500907 / 1.492716 (0.008190) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.216185 / 0.018006 (0.198179) | 0.438924 / 0.000490 (0.438435) | 0.001210 / 0.000200 (0.001011) | 0.000081 / 0.000054 (0.000027) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026193 / 0.037411 (-0.011219) | 0.105888 / 0.014526 (0.091363) | 0.115812 / 0.176557 (-0.060744) | 0.158748 / 0.737135 (-0.578387) | 0.121514 / 0.296338 (-0.174824) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / 
old (diff) | 0.399837 / 0.215209 (0.184628) | 3.996992 / 2.077655 (1.919338) | 1.784964 / 1.504120 (0.280844) | 1.591078 / 1.541195 (0.049883) | 1.666424 / 1.468490 (0.197934) | 0.711450 / 4.584777 (-3.873327) | 3.787814 / 3.745712 (0.042102) | 2.056776 / 5.269862 (-3.213085) | 1.332163 / 4.565676 (-3.233514) | 0.085755 / 0.424275 (-0.338520) | 0.012033 / 0.007607 (0.004426) | 0.511500 / 0.226044 (0.285455) | 5.098999 / 2.268929 (2.830071) | 2.288261 / 55.444624 (-53.156364) | 1.947483 / 6.876477 (-4.928994) | 1.987838 / 2.142072 (-0.154234) | 0.852241 / 4.805227 (-3.952986) | 0.164781 / 6.500664 (-6.335883) | 0.061825 / 0.075469 (-0.013644) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.202253 / 1.841788 (-0.639534) | 14.632608 / 8.074308 (6.558300) | 13.331320 / 10.191392 (3.139928) | 0.157944 / 0.680424 (-0.522480) | 0.029284 / 0.534201 (-0.504917) | 0.446636 / 0.579283 (-0.132647) | 0.437009 / 0.434364 (0.002645) | 0.521883 / 0.540337 (-0.018455) | 0.606687 / 1.386936 (-0.780249) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007528 / 0.011353 (-0.003825) | 0.005274 / 0.011008 (-0.005734) | 0.073524 / 0.038508 (0.035016) | 0.033893 / 0.023109 (0.010784) | 0.335432 / 0.275898 (0.059534) | 0.379981 / 0.323480 (0.056501) | 0.005954 / 0.007986 (-0.002031) | 0.004126 / 0.004328 (-0.000203) | 0.072891 / 0.004250 (0.068641) | 0.046517 / 0.037052 (0.009465) | 0.337241 / 0.258489 (0.078752) | 0.385562 / 0.293841 (0.091721) | 0.036410 / 0.128546 (-0.092136) | 0.012246 / 0.075646 (-0.063401) | 0.085974 / 0.419271 (-0.333298) | 0.049665 / 0.043533 (0.006133) | 0.330919 / 0.255139 (0.075780) | 0.352041 / 0.283200 (0.068841) | 0.103751 / 0.141683 (-0.037931) | 1.468851 / 1.452155 (0.016696) | 1.565380 / 1.492716 (0.072663) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.260431 / 0.018006 (0.242425) | 0.444554 / 0.000490 (0.444064) | 0.016055 / 0.000200 
(0.015855) | 0.000283 / 0.000054 (0.000228) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029130 / 0.037411 (-0.008281) | 0.112002 / 0.014526 (0.097476) | 0.120769 / 0.176557 (-0.055788) | 0.169345 / 0.737135 (-0.567790) | 0.129609 / 0.296338 (-0.166730) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.432211 / 0.215209 (0.217002) | 4.293008 / 2.077655 (2.215353) | 2.071291 / 1.504120 (0.567171) | 1.859322 / 1.541195 (0.318127) | 1.971434 / 1.468490 (0.502943) | 0.704042 / 4.584777 (-3.880735) | 3.791696 / 3.745712 (0.045983) | 3.142632 / 5.269862 (-2.127230) | 1.735151 / 4.565676 (-2.830525) | 0.086203 / 0.424275 (-0.338072) | 0.012542 / 0.007607 (0.004935) | 0.534870 / 0.226044 (0.308826) | 5.326042 / 2.268929 (3.057113) | 2.547960 / 55.444624 (-52.896664) | 2.212730 / 6.876477 (-4.663747) | 2.296177 / 2.142072 (0.154105) | 0.840311 / 4.805227 (-3.964917) | 0.168353 / 6.500664 (-6.332311) | 0.065949 / 0.075469 (-0.009520) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.255589 / 1.841788 (-0.586199) | 14.947344 / 8.074308 (6.873036) | 13.253721 / 10.191392 (3.062329) | 0.162349 / 0.680424 (-0.518075) | 0.017579 / 0.534201 (-0.516622) | 0.420758 / 0.579283 (-0.158525) | 0.430030 / 0.434364 (-0.004334) | 0.524669 / 0.540337 (-0.015669) | 0.623920 / 1.386936 (-0.763016) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#35b789e8f6826b6b5a6b48fcc2416c890a1f326a \"CML watermark\")\n" ]
1,677,858,391,000
1,677,880,638,000
1,677,880,217,000
CONTRIBUTOR
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5605/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5605/timeline
null
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5605", "html_url": "https://github.com/huggingface/datasets/pull/5605", "diff_url": "https://github.com/huggingface/datasets/pull/5605.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5605.patch", "merged_at": "2023-03-03T21:50:17" }
true
https://api.github.com/repos/huggingface/datasets/issues/5604
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5604/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5604/comments
https://api.github.com/repos/huggingface/datasets/issues/5604/events
https://github.com/huggingface/datasets/issues/5604
1,608,304,775
I_kwDODunzps5f3MiH
5,604
Problems with downloading The Pile
{ "login": "sentialx", "id": 11065386, "node_id": "MDQ6VXNlcjExMDY1Mzg2", "avatar_url": "https://avatars.githubusercontent.com/u/11065386?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sentialx", "html_url": "https://github.com/sentialx", "followers_url": "https://api.github.com/users/sentialx/followers", "following_url": "https://api.github.com/users/sentialx/following{/other_user}", "gists_url": "https://api.github.com/users/sentialx/gists{/gist_id}", "starred_url": "https://api.github.com/users/sentialx/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sentialx/subscriptions", "organizations_url": "https://api.github.com/users/sentialx/orgs", "repos_url": "https://api.github.com/users/sentialx/repos", "events_url": "https://api.github.com/users/sentialx/events{/privacy}", "received_events_url": "https://api.github.com/users/sentialx/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi! \r\n\r\n\r\nYou can specify `download_config=DownloadConfig(resume_download=True))` in `load_dataset` to resume the download when re-running the code after the timeout error:\r\n```python\r\nfrom datasets import load_dataset, DownloadConfig\r\ndataset = load_dataset('the_pile', split='train', cache_dir='F:\\datasets', download_config=DownloadConfig(resume_download=True))\r\n```\r\n\r\n", "@mariosasko , I used your suggestion but its not saving anything , just stops and runs from the same point .\r\nbelow is the script to download and save on disk .\r\n\r\n```\r\nfrom datasets import load_dataset, DownloadConfig\r\n\r\n\r\n#load the Pile dataset from Hugging Face Datasets\r\n#dataset = load_dataset('the_pile')\r\ndataset = load_dataset('the_pile', split='train', cache_dir='datasets', download_config=DownloadConfig(resume_download=True))\r\n\r\n\r\n# save each file in the dataset to disk\r\nfor i, example in enumerate(dataset['train']):\r\n filename = f'pile_file_{i}.json'\r\n with open(filename, 'w') as f:\r\n f.write(str(example))\r\n\r\nprint(\"Finished saving Pile dataset files to disk.\")\r\n```\r\n", "@mariosasko , it shows nothing in dataset folder\r\n\r\n```\r\n du -sh /mnt/nlp/hugging_face/*\r\n20K /mnt/nlp/hugging_face/datasets\r\n4.0K /mnt/nlp/hugging_face/download_pile.py\r\n```\r\n", "@mariosasko \r\n\r\n```\r\nroot@d20f0ab8f4f8:/mnt/hugging_face# python3 download_pile.py\r\nNo config specified, defaulting to: the_pile/all\r\nDownloading and preparing dataset the_pile/all to /mnt/hugging_face/datasets/the_pile/all/0.0.0/6fadc480ecb32470826cbf5900a9558b791ce55d5e9a0fdc8ad653e7b64bb349...\r\nDownloading data files: 0%| | 0/3 [00:00<?, ?it/s]\r\n\r\n\r\n\r\n\r\n\r\nDownloading data: 70%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Š | 10.7G/15.2G [12:09<11:53, 6.36MB/s]\r\nDownloading data: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 15.2G/15.2G [22:15<00:00, 7.25MB/s]\r\nDownloading data: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 15.2G/15.2G [46:17<00:00, 5.48MB/s]\r\nDownloading data: 40%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ– | 6.07G/15.3G [50:49<1:17:02, 1.99MB/s]\r\nTraceback (most recent call last):β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Š | 6.07G/15.3G [50:49<25:35:23, 99.9kB/s]\r\n File \"/usr/local/lib/python3.8/dist-packages/urllib3/response.py\", line 444, in _error_catcher\r\n yield\r\n File \"/usr/local/lib/python3.8/dist-packages/urllib3/response.py\", line 567, in read\r\n data = self._fp_read(amt) if not fp_closed else b\"\"\r\n File \"/usr/local/lib/python3.8/dist-packages/urllib3/response.py\", line 525, in _fp_read\r\n data = self._fp.read(chunk_amt)\r\n File \"/usr/lib/python3.8/http/client.py\", line 459, in read\r\n n = self.readinto(b)\r\n File 
\"/usr/lib/python3.8/http/client.py\", line 503, in readinto\r\n n = self.fp.readinto(b)\r\n File \"/usr/lib/python3.8/socket.py\", line 669, in readinto\r\n return self._sock.recv_into(b)\r\n File \"/usr/lib/python3.8/ssl.py\", line 1241, in recv_into\r\n return self.read(nbytes, buffer)\r\n File \"/usr/lib/python3.8/ssl.py\", line 1099, in read\r\n return self._sslobj.read(len, buffer)\r\nConnectionResetError: [Errno 104] Connection reset by peer\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.8/dist-packages/requests/models.py\", line 816, in generate\r\n yield from self.raw.stream(chunk_size, decode_content=True)\r\n File \"/usr/local/lib/python3.8/dist-packages/urllib3/response.py\", line 628, in stream\r\n data = self.read(amt=amt, decode_content=decode_content)\r\n File \"/usr/local/lib/python3.8/dist-packages/urllib3/response.py\", line 593, in read\r\n raise IncompleteRead(self._fp_bytes_read, self.length_remaining)\r\n File \"/usr/lib/python3.8/contextlib.py\", line 131, in __exit__\r\n self.gen.throw(type, value, traceback)\r\n File \"/usr/local/lib/python3.8/dist-packages/urllib3/response.py\", line 461, in _error_catcher\r\n raise ProtocolError(\"Connection broken: %r\" % e, e)\r\nurllib3.exceptions.ProtocolError: (\"Connection broken: ConnectionResetError(104, 'Connection reset by peer')\", ConnectionResetError(104, 'Connection reset by peer'))\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"download_pile.py\", line 6, in <module>\r\n dataset = load_dataset('the_pile', split='train', cache_dir='datasets', download_config=DownloadConfig(resume_download=True))\r\n File \"/usr/local/lib/python3.8/dist-packages/datasets/load.py\", line 1782, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/usr/local/lib/python3.8/dist-packages/datasets/builder.py\", line 872, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/usr/local/lib/python3.8/dist-packages/datasets/builder.py\", line 1649, in _download_and_prepare\r\n super()._download_and_prepare(\r\n File \"/usr/local/lib/python3.8/dist-packages/datasets/builder.py\", line 945, in _download_and_prepare\r\n split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n File \"/root/.cache/huggingface/modules/datasets_modules/datasets/the_pile/6fadc480ecb32470826cbf5900a9558b791ce55d5e9a0fdc8ad653e7b64bb349/the_pile.py\", line 192, in _split_generators\r\n data_dir = dl_manager.download(_DATA_URLS[self.config.name])\r\n File \"/usr/local/lib/python3.8/dist-packages/datasets/download/download_manager.py\", line 427, in download\r\n downloaded_path_or_paths = map_nested(\r\n File \"/usr/local/lib/python3.8/dist-packages/datasets/utils/py_utils.py\", line 443, in map_nested\r\n mapped = [\r\n File \"/usr/local/lib/python3.8/dist-packages/datasets/utils/py_utils.py\", line 444, in <listcomp>\r\n _single_map_nested((function, obj, types, None, True, None))\r\n File \"/usr/local/lib/python3.8/dist-packages/datasets/utils/py_utils.py\", line 363, in _single_map_nested\r\n mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar]\r\n File \"/usr/local/lib/python3.8/dist-packages/datasets/utils/py_utils.py\", line 363, in <listcomp>\r\n mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar]\r\n File 
\"/usr/local/lib/python3.8/dist-packages/datasets/utils/py_utils.py\", line 346, in _single_map_nested\r\n return function(data_struct)\r\n File \"/usr/local/lib/python3.8/dist-packages/datasets/download/download_manager.py\", line 453, in _download\r\n return cached_path(url_or_filename, download_config=download_config)\r\n File \"/usr/local/lib/python3.8/dist-packages/datasets/utils/file_utils.py\", line 182, in cached_path\r\n output_path = get_from_cache(\r\n File \"/usr/local/lib/python3.8/dist-packages/datasets/utils/file_utils.py\", line 575, in get_from_cache\r\n http_get(\r\n File \"/usr/local/lib/python3.8/dist-packages/datasets/utils/file_utils.py\", line 379, in http_get\r\n for chunk in response.iter_content(chunk_size=1024):\r\n File \"/usr/local/lib/python3.8/dist-packages/requests/models.py\", line 818, in generate\r\n raise ChunkedEncodingError(e)\r\nrequests.exceptions.ChunkedEncodingError: (\"Connection broken: ConnectionResetError(104, 'Connection reset by peer')\", ConnectionResetError(104, 'Connection reset by peer'))\r\n```\r\n", "Users with slow internet speed are doomed (4MB/s). The dataset downloads fine at minimum speed 10MB/s.\n\nAlso, when the train splits were generated and then I removed the downloads folder to save up disk space, it started redownloading the whole dataset. Is there any way to use the already generated splits instead?", "@sentialx @mariosasko , anytime on my above script , am I downloading and saving dataset correctly . Please suggest :)" ]
1,677,837,128,000
1,680,054,245,000
1,679,661,865,000
NONE
null
### Describe the bug The downloads in the screenshot seem to be interrupted after some time, and the last download throws a "Read timed out" error. ![image](https://user-images.githubusercontent.com/11065386/222687870-ec5fcb65-84e8-467d-9593-4ad7bdac4d50.png) Here are the downloaded files: ![image](https://user-images.githubusercontent.com/11065386/222688200-454c2288-49e5-4682-96e6-1eb69aca0852.png) They should all be ~14 GB each, like the files here (https://the-eye.eu/public/AI/pile/train/). Alternatively, can I download the files myself and use the dataset preparation script? ### Steps to reproduce the bug dataset = load_dataset('the_pile', split='train', cache_dir='F:\datasets') ### Expected behavior The files should be downloaded correctly. ### Environment info - `datasets` version: 2.10.1 - Platform: Windows-10-10.0.22623-SP0 - Python version: 3.10.5 - PyArrow version: 9.0.0 - Pandas version: 1.4.2
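A workaround discussed in the comments below is to let `datasets` resume the interrupted download on re-run; a sketch, reusing the same cache directory as in the report above:
```python
# Sketch of the resume workaround from the comments below: re-running
# load_dataset with resume_download=True continues a partial download
# instead of starting over (cache_dir here mirrors the report above).
from datasets import load_dataset, DownloadConfig

dataset = load_dataset(
    "the_pile",
    split="train",
    cache_dir="F:\\datasets",
    download_config=DownloadConfig(resume_download=True),
)
```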
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5604/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5604/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5603
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5603/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5603/comments
https://api.github.com/repos/huggingface/datasets/issues/5603/events
https://github.com/huggingface/datasets/pull/5603
1,607,143,509
PR_kwDODunzps5LJZzG
5,603
Don't compute checksums if not necessary in `datasets-cli test`
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008550 / 0.011353 (-0.002803) | 0.004476 / 0.011008 (-0.006532) | 0.100902 / 0.038508 (0.062394) | 0.029684 / 0.023109 (0.006575) | 0.308081 / 0.275898 (0.032183) | 0.363435 / 0.323480 (0.039955) | 0.006987 / 0.007986 (-0.000999) | 0.003401 / 0.004328 (-0.000927) | 0.078218 / 0.004250 (0.073967) | 0.036657 / 0.037052 (-0.000395) | 0.319670 / 0.258489 (0.061181) | 0.349952 / 0.293841 (0.056111) | 0.033416 / 0.128546 (-0.095130) | 0.011511 / 0.075646 (-0.064135) | 0.323888 / 0.419271 (-0.095384) | 0.042429 / 0.043533 (-0.001104) | 0.307310 / 0.255139 (0.052171) | 0.329459 / 0.283200 (0.046259) | 0.085209 / 0.141683 (-0.056474) | 1.475893 / 1.452155 (0.023739) | 1.502782 / 1.492716 (0.010065) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.200137 / 0.018006 (0.182131) | 0.411269 / 0.000490 (0.410780) | 0.000415 / 0.000200 (0.000215) | 0.000061 / 0.000054 (0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022626 / 0.037411 (-0.014785) | 0.097045 / 0.014526 (0.082519) | 0.102955 / 0.176557 (-0.073602) | 0.148411 / 0.737135 (-0.588725) | 0.107238 / 0.296338 (-0.189100) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.421683 / 0.215209 (0.206474) | 4.203031 / 2.077655 (2.125376) | 
1.908232 / 1.504120 (0.404112) | 1.698867 / 1.541195 (0.157672) | 1.743561 / 1.468490 (0.275071) | 0.693199 / 4.584777 (-3.891578) | 3.361022 / 3.745712 (-0.384690) | 2.989610 / 5.269862 (-2.280251) | 1.533036 / 4.565676 (-3.032641) | 0.082675 / 0.424275 (-0.341601) | 0.012419 / 0.007607 (0.004812) | 0.531543 / 0.226044 (0.305499) | 5.330595 / 2.268929 (3.061666) | 2.347519 / 55.444624 (-53.097105) | 1.975672 / 6.876477 (-4.900804) | 2.039541 / 2.142072 (-0.102532) | 0.810281 / 4.805227 (-3.994946) | 0.148917 / 6.500664 (-6.351747) | 0.065441 / 0.075469 (-0.010028) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.266213 / 1.841788 (-0.575574) | 13.628106 / 8.074308 (5.553798) | 13.852191 / 10.191392 (3.660799) | 0.149004 / 0.680424 (-0.531420) | 0.028549 / 0.534201 (-0.505652) | 0.399824 / 0.579283 (-0.179459) | 0.401231 / 0.434364 (-0.033133) | 0.473251 / 0.540337 (-0.067086) | 0.561094 / 1.386936 (-0.825842) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006669 / 0.011353 (-0.004684) | 0.004477 / 0.011008 (-0.006532) | 0.077514 / 0.038508 (0.039006) | 0.027489 / 0.023109 (0.004380) | 0.341935 / 0.275898 (0.066037) | 0.377392 / 0.323480 (0.053912) | 0.004947 / 0.007986 (-0.003039) | 0.004600 / 0.004328 (0.000271) | 0.075938 / 0.004250 (0.071687) | 0.039586 / 0.037052 (0.002534) | 0.344966 / 0.258489 (0.086477) | 0.392181 / 0.293841 (0.098340) | 0.031838 / 0.128546 (-0.096708) | 0.011572 / 0.075646 (-0.064075) | 0.085811 / 0.419271 (-0.333461) | 0.042250 / 0.043533 (-0.001283) | 0.345605 / 0.255139 (0.090466) | 0.367814 / 0.283200 (0.084615) | 0.090683 / 0.141683 (-0.051000) | 1.483168 / 1.452155 (0.031014) | 1.559724 / 1.492716 (0.067008) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.235655 / 0.018006 (0.217649) | 0.399016 / 0.000490 (0.398527) | 0.003096 / 0.000200 (0.002896) | 0.000077 / 0.000054 (0.000022) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024454 / 0.037411 (-0.012957) | 0.100710 / 0.014526 (0.086185) | 0.107950 / 0.176557 (-0.068606) | 0.161560 / 0.737135 (-0.575576) | 0.111840 / 0.296338 (-0.184498) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.441362 / 0.215209 (0.226153) | 4.428105 / 2.077655 (2.350450) | 2.074501 / 1.504120 (0.570381) | 1.866672 / 1.541195 (0.325477) | 1.928266 / 1.468490 (0.459776) | 0.703561 / 4.584777 (-3.881216) | 3.396537 / 3.745712 (-0.349175) | 3.047369 / 5.269862 (-2.222492) | 1.595133 / 4.565676 (-2.970543) | 0.084028 / 0.424275 (-0.340247) | 0.012349 / 0.007607 (0.004741) | 0.539354 / 0.226044 (0.313310) | 5.401535 / 2.268929 (3.132606) | 2.499874 / 55.444624 (-52.944750) | 2.161406 / 6.876477 (-4.715071) | 2.197385 / 2.142072 (0.055313) | 0.810864 / 4.805227 (-3.994363) | 0.152277 / 6.500664 (-6.348387) | 0.067266 / 0.075469 (-0.008203) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.280900 / 1.841788 (-0.560887) | 13.815731 / 8.074308 (5.741423) | 13.007438 / 10.191392 (2.816046) | 0.129711 / 0.680424 (-0.550713) | 0.016852 / 0.534201 (-0.517349) | 0.380775 / 0.579283 (-0.198508) | 0.384143 / 0.434364 (-0.050221) | 0.459954 / 0.540337 (-0.080383) | 0.549335 / 1.386936 (-0.837601) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#8805d67bd81ce48f481d5c1e56b84e6ebcaa2b2b \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009570 / 0.011353 (-0.001783) | 0.005219 / 0.011008 (-0.005789) | 0.098472 / 0.038508 (0.059964) | 0.035429 / 0.023109 (0.012320) | 0.303086 / 0.275898 (0.027188) | 0.365926 / 0.323480 (0.042446) | 0.008797 / 0.007986 (0.000811) | 0.004220 / 0.004328 (-0.000108) | 0.076670 / 0.004250 (0.072419) | 0.045596 / 0.037052 (0.008543) | 0.309476 / 0.258489 (0.050987) | 0.343958 / 0.293841 (0.050117) | 0.038741 / 0.128546 (-0.089805) | 0.011990 / 0.075646 (-0.063657) | 0.332326 / 0.419271 (-0.086945) | 0.048897 / 0.043533 (0.005364) | 0.296002 / 0.255139 (0.040863) | 0.322048 / 0.283200 (0.038849) | 0.104403 / 0.141683 (-0.037280) | 1.461777 / 1.452155 (0.009622) | 1.516362 / 1.492716 (0.023645) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.201565 / 0.018006 (0.183559) | 0.435781 / 0.000490 (0.435291) | 0.004215 / 0.000200 (0.004015) | 0.000282 / 0.000054 (0.000227) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027272 / 0.037411 (-0.010139) | 0.106157 / 0.014526 (0.091631) | 0.116948 / 0.176557 (-0.059609) | 0.160404 / 0.737135 (-0.576731) | 0.122518 / 0.296338 (-0.173820) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.397721 / 0.215209 (0.182512) | 3.966433 / 2.077655 (1.888778) | 1.755410 / 1.504120 (0.251290) | 1.566480 / 1.541195 (0.025285) | 1.623684 / 1.468490 (0.155194) | 0.696820 / 4.584777 (-3.887957) | 3.750437 / 3.745712 (0.004725) | 2.105875 / 5.269862 (-3.163986) | 1.442026 / 4.565676 (-3.123650) | 0.085026 / 0.424275 (-0.339249) | 0.012239 / 0.007607 (0.004632) | 0.502613 / 0.226044 (0.276569) | 5.049016 / 2.268929 (2.780087) | 2.314499 / 55.444624 (-53.130126) | 1.967943 / 6.876477 (-4.908534) | 2.033507 / 2.142072 (-0.108565) | 0.861908 / 4.805227 (-3.943319) | 0.167784 / 6.500664 (-6.332880) | 0.063022 / 0.075469 (-0.012447) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.210434 / 1.841788 (-0.631353) | 14.979319 / 8.074308 (6.905011) | 14.095263 / 10.191392 (3.903871) | 0.174203 / 0.680424 (-0.506221) | 0.028547 / 0.534201 (-0.505654) | 0.442509 / 0.579283 (-0.136774) | 0.445811 / 0.434364 (0.011447) | 0.531313 / 0.540337 
(-0.009024) | 0.636541 / 1.386936 (-0.750395) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007341 / 0.011353 (-0.004012) | 0.005197 / 0.011008 (-0.005811) | 0.075413 / 0.038508 (0.036905) | 0.033261 / 0.023109 (0.010152) | 0.339596 / 0.275898 (0.063698) | 0.376051 / 0.323480 (0.052571) | 0.005827 / 0.007986 (-0.002159) | 0.005473 / 0.004328 (0.001144) | 0.074851 / 0.004250 (0.070600) | 0.049059 / 0.037052 (0.012007) | 0.357182 / 0.258489 (0.098693) | 0.384589 / 0.293841 (0.090748) | 0.037122 / 0.128546 (-0.091424) | 0.012298 / 0.075646 (-0.063348) | 0.088191 / 0.419271 (-0.331081) | 0.052002 / 0.043533 (0.008469) | 0.343216 / 0.255139 (0.088077) | 0.364534 / 0.283200 (0.081334) | 0.105462 / 0.141683 (-0.036221) | 1.486717 / 1.452155 (0.034562) | 1.584725 / 1.492716 (0.092009) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.199210 / 0.018006 (0.181203) | 0.439069 / 0.000490 (0.438580) | 0.000436 / 0.000200 (0.000236) | 0.000059 / 0.000054 (0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029931 / 0.037411 (-0.007480) | 0.109564 / 0.014526 (0.095038) | 0.122284 / 0.176557 (-0.054273) | 0.170819 / 0.737135 (-0.566317) | 0.125886 / 0.296338 (-0.170452) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.422724 / 0.215209 (0.207515) | 4.210304 / 2.077655 (2.132650) | 2.001481 / 1.504120 (0.497361) | 1.810818 / 1.541195 (0.269623) | 1.901367 / 
1.468490 (0.432877) | 0.686004 / 4.584777 (-3.898773) | 3.768850 / 3.745712 (0.023138) | 2.079501 / 5.269862 (-3.190360) | 1.326970 / 4.565676 (-3.238706) | 0.085991 / 0.424275 (-0.338284) | 0.012298 / 0.007607 (0.004690) | 0.526878 / 0.226044 (0.300833) | 5.267241 / 2.268929 (2.998312) | 2.451781 / 55.444624 (-52.992843) | 2.109143 / 6.876477 (-4.767333) | 2.185426 / 2.142072 (0.043353) | 0.830165 / 4.805227 (-3.975063) | 0.166167 / 6.500664 (-6.334497) | 0.064077 / 0.075469 (-0.011392) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.270430 / 1.841788 (-0.571358) | 14.844852 / 8.074308 (6.770544) | 13.196672 / 10.191392 (3.005280) | 0.162853 / 0.680424 (-0.517571) | 0.017727 / 0.534201 (-0.516474) | 0.424803 / 0.579283 (-0.154480) | 0.439970 / 0.434364 (0.005606) | 0.530691 / 0.540337 (-0.009647) | 0.630474 / 1.386936 (-0.756462) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#24fb01b720ef4203d4ae6225f43cba912b1f6d55 \"CML watermark\")\n" ]
1,677,775,359,000
1,677,858,332,000
1,677,857,908,000
MEMBER
null
We only need to compute the checksums if a `dataset_infos.json` file exists.
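A rough sketch of the gating this describes; the function and variable names below are assumptions for illustration, not the merged code:
```python
# Illustrative sketch only: compute/verify checksums in `datasets-cli test`
# only when a dataset_infos.json file is present next to the dataset script.
import os

def should_verify_checksums(dataset_dir: str) -> bool:
    return os.path.exists(os.path.join(dataset_dir, "dataset_infos.json"))

# The test command would then skip the (slow) checksum computation entirely
# whenever should_verify_checksums(path) is False.
```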
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5603/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5603/timeline
null
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5603", "html_url": "https://github.com/huggingface/datasets/pull/5603", "diff_url": "https://github.com/huggingface/datasets/pull/5603.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5603.patch", "merged_at": "2023-03-03T15:38:28" }
true
https://api.github.com/repos/huggingface/datasets/issues/5602
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5602/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5602/comments
https://api.github.com/repos/huggingface/datasets/issues/5602/events
https://github.com/huggingface/datasets/pull/5602
1,607,054,110
PR_kwDODunzps5LJGfa
5,602
Return dict structure if columns are lists - to_tf_dataset
{ "login": "amyeroberts", "id": 22614925, "node_id": "MDQ6VXNlcjIyNjE0OTI1", "avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amyeroberts", "html_url": "https://github.com/amyeroberts", "followers_url": "https://api.github.com/users/amyeroberts/followers", "following_url": "https://api.github.com/users/amyeroberts/following{/other_user}", "gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}", "starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions", "organizations_url": "https://api.github.com/users/amyeroberts/orgs", "repos_url": "https://api.github.com/users/amyeroberts/repos", "events_url": "https://api.github.com/users/amyeroberts/events{/privacy}", "received_events_url": "https://api.github.com/users/amyeroberts/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5602). All of your documentation changes will be reflected on that endpoint.", "This is a great PR! Thinking about the UX though, maybe we could do it without the extra argument? Before this PR, the logic in `to_tf_dataset` was that if the user passed a single column name in either `columns` or `label_cols`, we converted it to a length-1 list. Then, later in the code, we convert output dicts with only one key to naked Tensors.\r\n\r\nWould it be easier if we removed the argument, but instead treated the cases differently? Passing a column name as a string could yield a single naked Tensor in the output as before, but passing a list of length 1 would yield a full dict? That way if you wanted dict output with a single key you could just say `columns=[col_name]`.\r\n\r\n(I'm not totally convinced this is a good idea yet, it just seems like it might be more intuitive)", "@Rocketknight1 Happy to implement it that way - it's certainly cleaner to not have another arg. In this case, am I right in saying we'd effectively set `return_dict` [here](https://github.com/huggingface/datasets/blob/6569014a9948eab7d031a3587405e64ba92d6c59/src/datasets/arrow_dataset.py#L410) - where columns are made into a list if they were a string? \r\n\r\nThere only concern I have is this changes the default behaviour, which might break things for people who were happily using `columns=[\"my_col_str\"]` before. \r\n\r\n\r\n", "@amyeroberts That's correct! Probably the simplest way to implement it would be to just add the flag there.\r\n\r\nAnd yeah, I'm aware this might be a slightly breaking change, but we've mostly tried to move users to `prepare_tf_dataset` in `transformers` at this point, so hopefully as long as that method doesn't break then most users won't be negatively affected by the change.", "@lhoestq @Rocketknight1 - I've remove the `return_dict` argument and implemented @Rocketknight1 's suggestion. LMK what you think :) ", "@lhoestq Of course :) I've opened a draft PR here for the updates needed in transformers examples and docs to keep the returned data structure consistent: https://github.com/huggingface/transformers/pull/21935. Note: even with the different structure, `model.fit` can still successfully be called. \r\n\r\nFor the [link you shared](https://github.com/huggingface/datasets/pull/url) - for me it returns a 404 error. Is there another link I could follow to see how to run the transformers CI with this branch? \r\n\r\nCurrently looking into the failing tests 😭 ", "Oh sorry - I fixed the URL: https://github.com/huggingface/transformers/commit/4eb55bbd593adf2e49362613ee32a11ddc4a854d", "The error shows `There appear to be 80 leaked shared_memory objects to clean up at shutdown`. IIRC to_tf_dataset does some shared memory stuff for multiprocessing - maybe @Rocketknight1 you know what's going on ?", "@lhoestq That warning appears anytime you interrupt a process using Python `SharedMemory` objects - it's only a problem if you still get the error when the process finishes normally! Our implementation of `to_tf_dataset` should clean things up properly.", "Ok, not sure why it fails then :/", "Hmm, will investigate! 
Sorry, I misread - I thought that warning was coming up in the context of another error", "IMO outputing different types based on nuances in the input could confuse users.\r\n\r\nAlso, in the ideal scenario,`to_tf_function` should return a `tf.data.Dataset` that iterates over the underlying Arrow data and yields (unprocessed) dicts of TF tensors, and all the model-specific code should live in Transformers (e.g., in `prepare_tf_dataset`). So the goal would be to make `to_tf_dataset` more user-friendly, not more complex :).", "I think we agree @mariosasko :) \r\n\r\n> Also, in the ideal scenario,to_tf_function should return a tf.data.Dataset that iterates over the underlying Arrow data and yields (unprocessed) dicts of TF tensors\r\n\r\nThis I'll leave for another PR as it's outside the scope of this one and @Rocketknight1 will have far more knowledge and ideas about what is possible\r\n\r\n> all the model-specific code should live in Transformers (e.g., in prepare_tf_dataset\r\n\r\nAgreed! This PR isn't really a model specific change - although it was highlighted when trying to train a model. We definitely want to move model specific things out of datasets as much as possible. \r\n\r\n> IMO outputing different types based on nuances in the input could confuse users.\r\n> So the goal would be to make to_tf_dataset more user-friendly, not more complex :).\r\n\r\nThe aim was to move more towards being able to return the dict of TF tensors you suggest, whilst maintaining backwards compatibility. Personally, I found it surprising to be returned a tuple structure when I was using `to_tf_dataset`. The aim was to make `to_tf_dataset` more user friendly, but I agree that it has the potential to be confusing. \r\n\r\nFor context, the thought process behind this design was to: \r\n* Not add even more arguments to `to_tf_dataset`. \r\n* Have a feature selection -> return type logic in keeping with `datasets` e.g. `dataset['train'][:10]['feat1']` returns a list of values, whereas `dataset['train'][:10]['feat1', 'feat2']` returns a dictionary. \r\n\r\nVery happy to add any suggestions or changes you might have about how to make this design better! :) \r\n", "Hi ! Anything blocking here ? I'b be happy to help", "Hi @lhoestq - sorry this hasn't been very active for the past ~1.5 weeks. There's nothing specific blocking, other than not being able to replicate without running on CI, and still need to test a bit more to narrow down the issue. I should have time tomorrow to pick it up again :) ", "@lhoestq @Rocketknight1 Friendly ping for a review :) ", "Awesome ! What about showing a warning that this change is about to happen in the next version of `datasets`, and then apply this change in a subsequent major release ? This way folks at twitter won't hate us: https://github.com/twitter/the-algorithm/blob/138bb519975407d4ea0dc1478d897d451ef05dab/trust_and_safety_models/toxicity/data/mb_generator.py#L142-L148", "@lhoestq Sounds good! How would you like this warning to happen? I could open a PR to add a warning message within `to_tf_dataset`?", "Yup sounds good :)" ]
1,677,772,272,000
1,681,314,893,000
null
CONTRIBUTOR
null
This PR introduces new logic to `to_tf_dataset` affecting the returned data structure, enabling a dictionary structure to be returned, even if only one feature column is selected. If `columns` or `label_cols` is passed to `to_tf_dataset` as a list, the corresponding output is returned as a dictionary; if it is passed as a string, the bare tensor is returned. An outline of the behaviour: ``` dataset.to_tf_dataset(columns=["col_1"], label_cols="col_2") # ({'col_1': col_1}, col_2) dataset.to_tf_dataset(columns="col_1", label_cols="col_2") # (col_1, col_2) dataset.to_tf_dataset(columns="col_1") # col_1 dataset.to_tf_dataset(columns=["col_1"], label_cols=["col_2"]) # ({'col_1': tensor}, {'col_2': tensor}) dataset.to_tf_dataset(columns="col_1", label_cols=["col_2"]) # (col_1, {'col_2': tensor}) ``` ## Motivation Currently, when calling `to_tf_dataset`, the returned dataset collapses to a bare tensor (rather than a dictionary) if a single feature column is used. This can cause issues when calling `model.fit` on models which train without labels, e.g. [TFVitMAEForPreTraining](https://github.com/huggingface/transformers/blob/b6f47b539377ac1fd845c7adb4ccaa5eb514e126/src/transformers/models/vit_mae/modeling_vit_mae.py#L849). Specifically, [this line](https://github.com/huggingface/transformers/blob/d9e28d91a8b2d09b51a33155d3a03ad9fcfcbd1f/src/transformers/modeling_tf_utils.py#L1521), where it's assumed the input `x` is a dictionary if there is no label. ## Example Previous behaviour ```python In [1]: import tensorflow as tf ...: from datasets import load_dataset ...: ...: ...: def transform(batch): ...: def _transform_img(img): ...: img = img.convert("RGB") ...: img = tf.keras.utils.img_to_array(img) ...: img = tf.image.resize(img, (224, 224)) ...: img /= 255.0 ...: img = tf.transpose(img, perm=[2, 0, 1]) ...: return img ...: batch['pixel_values'] = [_transform_img(pil_img) for pil_img in batch['img']] ...: return batch ...: ...: ...: def collate_fn(examples): ...: pixel_values = tf.stack([example["pixel_values"] for example in examples]) ...: return {"pixel_values": pixel_values} ...: ...: ...: dataset = load_dataset('cifar10')['train'] ...: dataset = dataset.with_transform(transform) ...: dataset.to_tf_dataset(batch_size=8, columns=['pixel_values'], collate_fn=collate_fn) Out[1]: <PrefetchDataset element_spec=TensorSpec(shape=(None, 3, 224, 224), dtype=tf.float32, name=None)> ``` New behaviour ```python In [1]: import tensorflow as tf ...: from datasets import load_dataset ...: ...: ...: def transform(batch): ...: def _transform_img(img): ...: img = img.convert("RGB") ...: img = tf.keras.utils.img_to_array(img) ...: img = tf.image.resize(img, (224, 224)) ...: img /= 255.0 ...: img = tf.transpose(img, perm=[2, 0, 1]) ...: return img ...: batch['pixel_values'] = [_transform_img(pil_img) for pil_img in batch['img']] ...: return batch ...: ...: ...: def collate_fn(examples): ...: pixel_values = tf.stack([example["pixel_values"] for example in examples]) ...: return {"pixel_values": pixel_values} ...: ...: ...: dataset = load_dataset('cifar10')['train'] ...: dataset = dataset.with_transform(transform) ...: dataset.to_tf_dataset(batch_size=8, columns=['pixel_values'], collate_fn=collate_fn) Out[1]: <PrefetchDataset element_spec={'pixel_values': TensorSpec(shape=(None, 3, 224, 224), dtype=tf.float32, name=None)}> ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5602/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5602/timeline
null
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5602", "html_url": "https://github.com/huggingface/datasets/pull/5602", "diff_url": "https://github.com/huggingface/datasets/pull/5602.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5602.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/5601
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5601/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5601/comments
https://api.github.com/repos/huggingface/datasets/issues/5601/events
https://github.com/huggingface/datasets/issues/5601
1,606,685,976
I_kwDODunzps5fxBUY
5,601
Authorization error
{ "login": "OleksandrKorovii", "id": 107404835, "node_id": "U_kgDOBmbeIw", "avatar_url": "https://avatars.githubusercontent.com/u/107404835?v=4", "gravatar_id": "", "url": "https://api.github.com/users/OleksandrKorovii", "html_url": "https://github.com/OleksandrKorovii", "followers_url": "https://api.github.com/users/OleksandrKorovii/followers", "following_url": "https://api.github.com/users/OleksandrKorovii/following{/other_user}", "gists_url": "https://api.github.com/users/OleksandrKorovii/gists{/gist_id}", "starred_url": "https://api.github.com/users/OleksandrKorovii/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/OleksandrKorovii/subscriptions", "organizations_url": "https://api.github.com/users/OleksandrKorovii/orgs", "repos_url": "https://api.github.com/users/OleksandrKorovii/repos", "events_url": "https://api.github.com/users/OleksandrKorovii/events{/privacy}", "received_events_url": "https://api.github.com/users/OleksandrKorovii/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi! \r\n\r\nIt's better to report this kind of issue in the `huggingface_hub` repo, so if you still haven't resolved it, I suggest you open an issue there.", "Yeah, I solved it. Problem was in osxkeychain. When I do `hugginface-cli login` it's add token with default account (username)`hg_user` but my repo contain other username. When I changed username in keychain - it works now." ]
1,677,758,919,000
1,678,812,935,000
1,678,812,934,000
NONE
null
### Describe the bug I get an `Authorization error` when trying to push data to the Hugging Face datasets hub. ### Steps to reproduce the bug I did all the steps in the [tutorial](https://huggingface.co/docs/datasets/share): 1. `huggingface-cli login` with a WRITE token 2. `git lfs install` 3. `git clone https://huggingface.co/datasets/namespace/your_dataset_name` 4. ``` cp /somewhere/data/*.json . git lfs track *.json git add .gitattributes git add *.json git commit -m "add json files" ``` but when I execute `git push` I get the error: ``` Uploading LFS objects: 0% (0/1), 0 B | 0 B/s, done. batch response: Authorization error. error: failed to push some refs to 'https://huggingface.co/datasets/zeusfsx/ukrainian-news' ``` The data is ~100 GB, split across five JSON files (different parts). ### Expected behavior All my data is pushed to the hub ### Environment info - `datasets` version: 2.10.1 - Platform: macOS-13.2.1-arm64-arm-64bit - Python version: 3.10.10 - PyArrow version: 11.0.0 - Pandas version: 1.5.3
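Given the keychain resolution described in the comments above, a hedged way to check which account the locally stored token authenticates as, using only the documented `HfApi.whoami` call:

```python
# Hedged sketch: confirm which account the stored token resolves to.
# A mismatch between this name and the repo namespace can produce the
# LFS "batch response: Authorization error" above.
from huggingface_hub import HfApi

user = HfApi().whoami()
print(user["name"])  # should match the namespace of the repo you push to
```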
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5601/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5601/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5600
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5600/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5600/comments
https://api.github.com/repos/huggingface/datasets/issues/5600/events
https://github.com/huggingface/datasets/issues/5600
1,606,585,596
I_kwDODunzps5fwoz8
5,600
Dataloader getitem not working for DreamboothDatasets
{ "login": "salahiguiliz", "id": 76955987, "node_id": "MDQ6VXNlcjc2OTU1OTg3", "avatar_url": "https://avatars.githubusercontent.com/u/76955987?v=4", "gravatar_id": "", "url": "https://api.github.com/users/salahiguiliz", "html_url": "https://github.com/salahiguiliz", "followers_url": "https://api.github.com/users/salahiguiliz/followers", "following_url": "https://api.github.com/users/salahiguiliz/following{/other_user}", "gists_url": "https://api.github.com/users/salahiguiliz/gists{/gist_id}", "starred_url": "https://api.github.com/users/salahiguiliz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/salahiguiliz/subscriptions", "organizations_url": "https://api.github.com/users/salahiguiliz/orgs", "repos_url": "https://api.github.com/users/salahiguiliz/repos", "events_url": "https://api.github.com/users/salahiguiliz/events{/privacy}", "received_events_url": "https://api.github.com/users/salahiguiliz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi! \r\n\r\n> (see example of DreamboothDatasets)\r\n\r\n\r\nCould you please provide a link to it? If you are referring to the example in the `diffusers` repo, your issue is unrelated to `datasets` as that example uses `Dataset` from PyTorch to load data." ]
1,677,754,827,000
1,678,730,375,000
1,678,730,375,000
NONE
null
### Describe the bug The dataloader's `__getitem__` is not working as before (see the example of [DreamboothDatasets](https://github.com/huggingface/peft/blob/main/examples/lora_dreambooth/train_dreambooth.py#L451C14-L529)); moving `datasets` back to 2.8.0 solved the issue. ### Steps to reproduce the bug 1. Use DreamBoothDataset to load some images 2. An error occurs after loading, when trying to visualise the images ### Expected behavior I was expecting a numpy array of the image ### Environment info - Platform: Linux-5.10.147+-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 9.0.0 - Pandas version: 1.3.5
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5600/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5600/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5598
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5598/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5598/comments
https://api.github.com/repos/huggingface/datasets/issues/5598/events
https://github.com/huggingface/datasets/pull/5598
1,605,018,478
PR_kwDODunzps5LCMiX
5,598
Fix push_to_hub with no dataset_infos
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008823 / 0.011353 (-0.002529) | 0.004738 / 0.011008 (-0.006270) | 0.102338 / 0.038508 (0.063830) | 0.030603 / 0.023109 (0.007494) | 0.302995 / 0.275898 (0.027097) | 0.362080 / 0.323480 (0.038600) | 0.007096 / 0.007986 (-0.000889) | 0.003493 / 0.004328 (-0.000835) | 0.079129 / 0.004250 (0.074878) | 0.037966 / 0.037052 (0.000914) | 0.310412 / 0.258489 (0.051923) | 0.346740 / 0.293841 (0.052899) | 0.033795 / 0.128546 (-0.094751) | 0.011595 / 0.075646 (-0.064051) | 0.325189 / 0.419271 (-0.094083) | 0.041679 / 0.043533 (-0.001854) | 0.302339 / 0.255139 (0.047200) | 0.322519 / 0.283200 (0.039319) | 0.089058 / 0.141683 (-0.052625) | 1.496223 / 1.452155 (0.044068) | 1.512562 / 1.492716 (0.019845) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.009298 / 0.018006 (-0.008709) | 0.406726 / 0.000490 (0.406236) | 0.003753 / 0.000200 (0.003553) | 0.000082 / 0.000054 (0.000028) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023327 / 0.037411 (-0.014084) | 0.098175 / 0.014526 (0.083649) | 0.106040 / 0.176557 (-0.070516) | 0.151934 / 0.737135 (-0.585201) | 0.108465 / 0.296338 (-0.187873) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.419073 / 0.215209 (0.203864) | 4.188012 / 2.077655 (2.110358) | 
1.857667 / 1.504120 (0.353547) | 1.664124 / 1.541195 (0.122929) | 1.704341 / 1.468490 (0.235851) | 0.699671 / 4.584777 (-3.885106) | 3.391110 / 3.745712 (-0.354602) | 1.871136 / 5.269862 (-3.398725) | 1.176794 / 4.565676 (-3.388882) | 0.083322 / 0.424275 (-0.340953) | 0.012450 / 0.007607 (0.004843) | 0.525058 / 0.226044 (0.299014) | 5.265425 / 2.268929 (2.996497) | 2.320672 / 55.444624 (-53.123952) | 1.964806 / 6.876477 (-4.911671) | 2.027055 / 2.142072 (-0.115017) | 0.819768 / 4.805227 (-3.985459) | 0.149638 / 6.500664 (-6.351026) | 0.064774 / 0.075469 (-0.010695) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.204575 / 1.841788 (-0.637212) | 13.651878 / 8.074308 (5.577570) | 13.751973 / 10.191392 (3.560581) | 0.154781 / 0.680424 (-0.525643) | 0.028887 / 0.534201 (-0.505314) | 0.404905 / 0.579283 (-0.174379) | 0.411320 / 0.434364 (-0.023043) | 0.485026 / 0.540337 (-0.055311) | 0.579690 / 1.386936 (-0.807246) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006615 / 0.011353 (-0.004737) | 0.004606 / 0.011008 (-0.006402) | 0.076099 / 0.038508 (0.037591) | 0.027247 / 0.023109 (0.004137) | 0.360731 / 0.275898 (0.084833) | 0.393688 / 0.323480 (0.070208) | 0.005079 / 0.007986 (-0.002906) | 0.003345 / 0.004328 (-0.000984) | 0.077184 / 0.004250 (0.072934) | 0.037850 / 0.037052 (0.000797) | 0.379738 / 0.258489 (0.121249) | 0.400474 / 0.293841 (0.106633) | 0.031581 / 0.128546 (-0.096966) | 0.011508 / 0.075646 (-0.064138) | 0.084966 / 0.419271 (-0.334306) | 0.041740 / 0.043533 (-0.001793) | 0.349887 / 0.255139 (0.094748) | 0.384405 / 0.283200 (0.101205) | 0.089022 / 0.141683 (-0.052661) | 1.503448 / 1.452155 (0.051293) | 1.564870 / 1.492716 (0.072154) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.233581 / 0.018006 (0.215574) | 0.413819 / 0.000490 (0.413330) | 0.000398 / 0.000200 (0.000198) | 0.000060 / 0.000054 (0.000006) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024805 / 0.037411 (-0.012607) | 0.101348 / 0.014526 (0.086822) | 0.108701 / 0.176557 (-0.067856) | 0.160011 / 0.737135 (-0.577124) | 0.111696 / 0.296338 (-0.184642) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.436303 / 0.215209 (0.221094) | 4.368684 / 2.077655 (2.291029) | 2.082366 / 1.504120 (0.578247) | 1.888108 / 1.541195 (0.346913) | 1.958295 / 1.468490 (0.489804) | 0.700858 / 4.584777 (-3.883919) | 3.408321 / 3.745712 (-0.337391) | 1.872960 / 5.269862 (-3.396902) | 1.165116 / 4.565676 (-3.400560) | 0.083556 / 0.424275 (-0.340719) | 0.012348 / 0.007607 (0.004741) | 0.536551 / 0.226044 (0.310506) | 5.359974 / 2.268929 (3.091045) | 2.539043 / 55.444624 (-52.905581) | 2.200314 / 6.876477 (-4.676162) | 2.222051 / 2.142072 (0.079979) | 0.808567 / 4.805227 (-3.996661) | 0.151222 / 6.500664 (-6.349442) | 0.066351 / 0.075469 (-0.009118) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.265502 / 1.841788 (-0.576286) | 13.692066 / 8.074308 (5.617758) | 13.124507 / 10.191392 (2.933115) | 0.129545 / 0.680424 (-0.550879) | 0.016827 / 0.534201 (-0.517374) | 0.380326 / 0.579283 (-0.198957) | 0.387268 / 0.434364 (-0.047096) | 0.463722 / 0.540337 (-0.076616) | 0.553681 / 1.386936 (-0.833255) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#6569014a9948eab7d031a3587405e64ba92d6c59 \"CML watermark\")\n" ]
1,677,678,846,000
1,677,764,833,000
1,677,764,417,000
MEMBER
null
As reported in https://github.com/vijaydwivedi75/lrgb/issues/10, `push_to_hub` fails if the remote repository already exists and has a README.md without `dataset_info` in the YAML tags cc @clefourrier
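A hedged sketch of the failing scenario; the repo id below is hypothetical:

```python
# Hypothetical repo id: the target repo already exists and its README.md
# has YAML tags without a `dataset_info` section, which previously made
# this call fail.
from datasets import Dataset

ds = Dataset.from_dict({"a": [1, 2, 3]})
ds.push_to_hub("username/repo-with-readme-but-no-dataset-info")
```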
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5598/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5598/timeline
null
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5598", "html_url": "https://github.com/huggingface/datasets/pull/5598", "diff_url": "https://github.com/huggingface/datasets/pull/5598.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5598.patch", "merged_at": "2023-03-02T13:40:17" }
true
https://api.github.com/repos/huggingface/datasets/issues/5597
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5597/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5597/comments
https://api.github.com/repos/huggingface/datasets/issues/5597/events
https://github.com/huggingface/datasets/issues/5597
1,604,928,721
I_kwDODunzps5fqUTR
5,597
in-place dataset update
{ "login": "speedcell4", "id": 3585459, "node_id": "MDQ6VXNlcjM1ODU0NTk=", "avatar_url": "https://avatars.githubusercontent.com/u/3585459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/speedcell4", "html_url": "https://github.com/speedcell4", "followers_url": "https://api.github.com/users/speedcell4/followers", "following_url": "https://api.github.com/users/speedcell4/following{/other_user}", "gists_url": "https://api.github.com/users/speedcell4/gists{/gist_id}", "starred_url": "https://api.github.com/users/speedcell4/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/speedcell4/subscriptions", "organizations_url": "https://api.github.com/users/speedcell4/orgs", "repos_url": "https://api.github.com/users/speedcell4/repos", "events_url": "https://api.github.com/users/speedcell4/events{/privacy}", "received_events_url": "https://api.github.com/users/speedcell4/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892913, "node_id": "MDU6TGFiZWwxOTM1ODkyOTEz", "url": "https://api.github.com/repos/huggingface/datasets/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": "This will not be worked on" } ]
closed
false
null
[]
[ "We won't support in-place modifications since `datasets` is based on the Apache Arrow format which doesn't support in-place modifications.\r\n\r\nIn your case the old dataset is garbage collected pretty quickly so you won't have memory issues.\r\n\r\nNote that datasets loaded from disk (memory mapped) are not loaded in memory, and therefore the new dataset actually use the same buffers as the old one.", "Thank you for your detailed reply.\r\n\r\n> In your case the old dataset is garbage collected pretty quickly so you won't have memory issues.\r\n\r\nI understand this, but it still copies the old dataset to create the new one, is this correct? So maybe it is not memory-consuming, but time-consuming?", "Indeed, and because of that it is more efficient to add multiple rows at once instead of one by one, using `concatenate_datasets` for example." ]
1,677,675,498,000
1,677,763,841,000
1,677,728,820,000
NONE
null
### Motivation In the case where I create an empty `Dataset` and keep appending new rows to it, I found that each call creates a new dataset. This looks quite memory-consuming, and I wonder if there is a more efficient way to do it. ```python from datasets import Dataset ds = Dataset.from_list([]) ds.add_item({'a': [1, 2, 3], 'b': 4}) print(ds) >>> Dataset({ >>> features: [], >>> num_rows: 0 >>> }) ds = ds.add_item({'a': [1, 2, 3], 'b': 4}) print(ds) >>> Dataset({ >>> features: ['a', 'b'], >>> num_rows: 1 >>> }) ``` ### Feature request A call for in-place dataset update functions that update the existing `Dataset` in place without creating a new copy. The interface should follow the PyTorch convention, where the in-place version of `function` is named `function_`. For example, the in-place version of `add_item`, i.e. `add_item_`, immediately updates the `Dataset`. ```python from datasets import Dataset ds = Dataset.from_list([]) ds.add_item({'a': [1, 2, 3], 'b': 4}) print(ds) >>> Dataset({ >>> features: [], >>> num_rows: 0 >>> }) ds.add_item_({'a': [1, 2, 3], 'b': 4}) print(ds) >>> Dataset({ >>> features: ['a', 'b'], >>> num_rows: 1 >>> }) ``` ### Related Functions * `.map` * `.filter` * `.add_item`
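As noted in the comments above, a batched alternative already exists; a minimal sketch using the documented `concatenate_datasets`:

```python
# Sketch of the batched alternative suggested in the discussion: build the
# new rows as their own Dataset and concatenate once, instead of calling
# add_item repeatedly (each call materializes a new Dataset object).
from datasets import Dataset, concatenate_datasets

ds = Dataset.from_list([{"a": [1, 2, 3], "b": 4}])
new_rows = Dataset.from_list([{"a": [4, 5, 6], "b": 7}, {"a": [7, 8, 9], "b": 10}])
ds = concatenate_datasets([ds, new_rows])
print(ds.num_rows)  # 3
```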
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5597/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5597/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5596
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5596/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5596/comments
https://api.github.com/repos/huggingface/datasets/issues/5596/events
https://github.com/huggingface/datasets/issues/5596
1,604,919,993
I_kwDODunzps5fqSK5
5,596
[TypeError: Couldn't cast array of type] Can only load a subset of the dataset
{ "login": "loubnabnl", "id": 44069155, "node_id": "MDQ6VXNlcjQ0MDY5MTU1", "avatar_url": "https://avatars.githubusercontent.com/u/44069155?v=4", "gravatar_id": "", "url": "https://api.github.com/users/loubnabnl", "html_url": "https://github.com/loubnabnl", "followers_url": "https://api.github.com/users/loubnabnl/followers", "following_url": "https://api.github.com/users/loubnabnl/following{/other_user}", "gists_url": "https://api.github.com/users/loubnabnl/gists{/gist_id}", "starred_url": "https://api.github.com/users/loubnabnl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/loubnabnl/subscriptions", "organizations_url": "https://api.github.com/users/loubnabnl/orgs", "repos_url": "https://api.github.com/users/loubnabnl/repos", "events_url": "https://api.github.com/users/loubnabnl/events{/privacy}", "received_events_url": "https://api.github.com/users/loubnabnl/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Apparently some JSON objects have a `\"labels\"` field. Since this field is not present in every object, you must specify all the fields types in the README.md\r\n\r\nEDIT: actually specifying the feature types doesn’t solve the issue, it raises an error because β€œlabels” is missing in the data", "We've updated the dataset to remove the extra `labels` field from some files, closing this issue. Thanks!", "A similar error occurs in the Pile dataset (EleutherAI/the_pile)\r\n\r\nLoading the dataset produces the following error.\r\n\r\n```\r\nTypeError: Couldn't cast array of type\r\nstruct<file: string, id: string>\r\nto\r\n{'id': Value(dtype='string', id=None)}\r\n```\r\n", "I think this was fixed in https://huggingface.co/datasets/EleutherAI/the_pile/discussions/11" ]
1,677,675,188,000
1,681,899,577,000
1,677,755,531,000
NONE
null
### Describe the bug I'm trying to load this [dataset](https://huggingface.co/datasets/bigcode-data/the-stack-gh-issues) which consists of jsonl files and I get the following error: ``` casted_values = _c(array.values, feature[0]) File "/opt/conda/lib/python3.7/site-packages/datasets/table.py", line 1839, in wrapper return func(array, *args, **kwargs) File "/opt/conda/lib/python3.7/site-packages/datasets/table.py", line 2132, in cast_array_to_feature raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{feature}") TypeError: Couldn't cast array of type struct<type: string, action: string, datetime: timestamp[s], author: string, title: string, description: string, comment_id: int64, comment: string, labels: list<item: string>> to {'type': Value(dtype='string', id=None), 'action': Value(dtype='string', id=None), 'datetime': Value(dtype='timestamp[s]', id=None), 'author': Value(dtype='string', id=None), 'title': Value(dtype='string', id=None), 'description': Value(dtype='string', id=None), 'comment_id': Value(dtype='int64', id=None), 'comment': Value(dtype='string', id=None)} ``` But I can successfully load a subset of the dataset; for example, this works: ```python ds = load_dataset('bigcode-data/the-stack-gh-issues', split="train", data_files=[f"data/data-{x}.jsonl" for x in range(10)]) ``` and `ds.features` returns: ``` {'repo': Value(dtype='string', id=None), 'org': Value(dtype='string', id=None), 'issue_id': Value(dtype='int64', id=None), 'issue_number': Value(dtype='int64', id=None), 'pull_request': {'user_login': Value(dtype='string', id=None), 'repo': Value(dtype='string', id=None), 'number': Value(dtype='int64', id=None)}, 'events': [{'type': Value(dtype='string', id=None), 'action': Value(dtype='string', id=None), 'datetime': Value(dtype='timestamp[s]', id=None), 'author': Value(dtype='string', id=None), 'title': Value(dtype='string', id=None), 'description': Value(dtype='string', id=None), 'comment_id': Value(dtype='int64', id=None), 'comment': Value(dtype='string', id=None)}]} ``` So I'm not sure if there's an issue with just some of the files. I'd be grateful for any suggestions to fix the issue. Side note: I saw this related [issue](https://github.com/huggingface/datasets/issues/3637) and tried to write a loading script to have `events` as a `Sequence` and not `list` [here](https://huggingface.co/datasets/bigcode-data/the-stack-gh-issues/blob/main/loading.py) (the script was renamed). It worked with a subset locally but doesn't work for the remote dataset: it can't find https://huggingface.co/datasets/bigcode-data/the-stack-gh-issues/resolve/main/data. ### Steps to reproduce the bug ```python from datasets import load_dataset ds = load_dataset('bigcode-data/the-stack-gh-issues', split="train") ``` ### Expected behavior Load the entire dataset successfully. ### Environment info - `datasets` version: 2.10.1 - Platform: Linux-4.19.0-23-cloud-amd64-x86_64-with-debian-10.13 - Python version: 3.7.12 - PyArrow version: 9.0.0 - Pandas version: 1.3.4
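One mitigation that is commonly tried is pinning the schema explicitly via `features`. Note the caveat in the comments above: this alone did not resolve the issue here, since `labels` is absent from most records. A hedged sketch, assuming the repo is read by the packaged JSON builder, which accepts `features`:

```python
# Hedged sketch (not a confirmed fix, per the comment above): declare the
# optional "labels" field explicitly so schema inference doesn't diverge
# across files.
from datasets import Features, Sequence, Value, load_dataset

event = {
    "type": Value("string"),
    "action": Value("string"),
    "datetime": Value("timestamp[s]"),
    "author": Value("string"),
    "title": Value("string"),
    "description": Value("string"),
    "comment_id": Value("int64"),
    "comment": Value("string"),
    "labels": Sequence(Value("string")),
}
features = Features({
    "repo": Value("string"),
    "org": Value("string"),
    "issue_id": Value("int64"),
    "issue_number": Value("int64"),
    "pull_request": {
        "user_login": Value("string"),
        "repo": Value("string"),
        "number": Value("int64"),
    },
    "events": [event],
})
ds = load_dataset("bigcode-data/the-stack-gh-issues", split="train", features=features)
```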
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5596/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5596/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5595
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5595/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5595/comments
https://api.github.com/repos/huggingface/datasets/issues/5595/events
https://github.com/huggingface/datasets/pull/5595
1,604,070,629
PR_kwDODunzps5K--V9
5,595
Unpins SQLAlchemy
{ "login": "lazarust", "id": 46943923, "node_id": "MDQ6VXNlcjQ2OTQzOTIz", "avatar_url": "https://avatars.githubusercontent.com/u/46943923?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lazarust", "html_url": "https://github.com/lazarust", "followers_url": "https://api.github.com/users/lazarust/followers", "following_url": "https://api.github.com/users/lazarust/following{/other_user}", "gists_url": "https://api.github.com/users/lazarust/gists{/gist_id}", "starred_url": "https://api.github.com/users/lazarust/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lazarust/subscriptions", "organizations_url": "https://api.github.com/users/lazarust/orgs", "repos_url": "https://api.github.com/users/lazarust/repos", "events_url": "https://api.github.com/users/lazarust/events{/privacy}", "received_events_url": "https://api.github.com/users/lazarust/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5595). All of your documentation changes will be reflected on that endpoint.", "It looks like this issue hasn't been fixed yet, so let's wait a bit more.", "@lazarust thanks for your work, but unfortunately we cannot merge it.\r\n\r\nSee my comment in: https://github.com/huggingface/datasets/issues/5477#issuecomment-1495512688\r\n\r\nThe fix was released yesterday (2023-04-03) only in `pandas-2.0.0`:\r\n- https://github.com/pandas-dev/pandas/releases/tag/v2.0.0\r\n\r\nbut it will not be back-ported to `pandas-1`:\r\n- https://github.com/pandas-dev/pandas/pull/48576#issuecomment-1466467159\r\n\r\nAlso note that `pandas-2.0.0` dropped support for Python 3.7:\r\n- https://github.com/pandas-dev/pandas/issues/41678\r\n- https://github.com/pandas-dev/pandas/pull/41989\r\n\r\nTherefore, we cannot unpin `sqlalchemy` until we drop support for Python 3.7 (these Python users cannot use `pandas-2`). See our latest CI checks below:\r\n- \"CI / test\" fails because it runs on Python 3.7\r\n- \"CI / test_py310\" succeeds because it runs on Python 3.10 " ]
1,677,634,425,000
1,680,596,419,000
1,680,596,354,000
NONE
null
Closes #5477
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5595/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5595/timeline
null
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5595", "html_url": "https://github.com/huggingface/datasets/pull/5595", "diff_url": "https://github.com/huggingface/datasets/pull/5595.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5595.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/5594
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5594/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5594/comments
https://api.github.com/repos/huggingface/datasets/issues/5594/events
https://github.com/huggingface/datasets/issues/5594
1,603,980,995
I_kwDODunzps5fms7D
5,594
Error while downloading the xtreme udpos dataset
{ "login": "simran-khanuja", "id": 24687672, "node_id": "MDQ6VXNlcjI0Njg3Njcy", "avatar_url": "https://avatars.githubusercontent.com/u/24687672?v=4", "gravatar_id": "", "url": "https://api.github.com/users/simran-khanuja", "html_url": "https://github.com/simran-khanuja", "followers_url": "https://api.github.com/users/simran-khanuja/followers", "following_url": "https://api.github.com/users/simran-khanuja/following{/other_user}", "gists_url": "https://api.github.com/users/simran-khanuja/gists{/gist_id}", "starred_url": "https://api.github.com/users/simran-khanuja/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/simran-khanuja/subscriptions", "organizations_url": "https://api.github.com/users/simran-khanuja/orgs", "repos_url": "https://api.github.com/users/simran-khanuja/repos", "events_url": "https://api.github.com/users/simran-khanuja/events{/privacy}", "received_events_url": "https://api.github.com/users/simran-khanuja/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Hi! I cannot reproduce this error on my machine.\r\n\r\nThe raised error could mean that one of the downloaded files is corrupted. To verify this is not the case, you can run `load_dataset` as follows:\r\n```python\r\ntrain_dataset = load_dataset('xtreme', 'udpos.English', split=\"train\", cache_dir=args.cache_dir, download_mode=\"force_redownload\", verification_mode=\"all_checks\")\r\n```", "Hi! Apologies for the delayed response! I tried the above and it doesn't solve the issue. Actually, the dataset gets downloaded most times, but sometimes this error occurs (at random afaik). Is it possible that there is a server issue for this particular dataset? I am able to download other datasets using the same code on the same machine with no issues :( I get this error now : \r\n```\r\nDownloading data: 16%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Œ | 55.9M/355M [04:45<25:25, 196kB/s]\r\nTraceback (most recent call last):\r\n File \"/home/skhanuja/Optimal-Resource-Allocation-for-Multilingual-Finetuning/src/train_al.py\", line 1107, in <module>\r\n main()\r\n File \"/home/skhanuja/Optimal-Resource-Allocation-for-Multilingual-Finetuning/src/train_al.py\", line 439, in main\r\n en_dataset = load_dataset(\"xtreme\", \"udpos.English\", split=\"train\", download_mode=\"force_redownload\", verification_mode=\"all_checks\")\r\n File \"/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/site-packages/datasets/load.py\", line 1782, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/site-packages/datasets/builder.py\", line 872, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/site-packages/datasets/builder.py\", line 1649, in _download_and_prepare\r\n super()._download_and_prepare(\r\n File \"/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/site-packages/datasets/builder.py\", line 949, in _download_and_prepare\r\n verify_checksums(\r\n File \"/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/site-packages/datasets/utils/info_utils.py\", line 62, in verify_checksums\r\n raise NonMatchingChecksumError(\r\ndatasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https://lindat.mff.cuni.cz/repository/xmlui/bitstream/handle/11234/1-3105/ud-treebanks-v2.5.tgz']\r\nSet `verification_mode='no_checks'` to skip checksums verification and ignore this error\r\n```" ]
1,677,627,653,000
1,678,730,313,000
null
NONE
null
### Describe the bug Hi, I am facing an error while downloading the xtreme udpos dataset using load_dataset. I have datasets 2.10.1 installed ```Downloading and preparing dataset xtreme/udpos.Arabic to /compute/tir-1-18/skhanuja/multilingual_ft/cache/data/xtreme/udpos.Arabic/1.0.0/29f5d57a48779f37ccb75cb8708d1095448aad0713b425bdc1ff9a4a128a56e4... Downloading data: 16%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ– | 56.9M/355M [03:11<16:43, 297kB/s] Generating train split: 0%| | 0/6075 [00:00<?, ? examples/s]Traceback (most recent call last): File "/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/site-packages/datasets/builder.py", line 1608, in _prepare_split_single for key, record in generator: File "/home/skhanuja/.cache/huggingface/modules/datasets_modules/datasets/xtreme/29f5d57a48779f37ccb75cb8708d1095448aad0713b425bdc1ff9a4a128a56e4/xtreme.py", line 732, in _generate_examples yield from UdposParser.generate_examples(config=self.config, filepath=filepath, **kwargs) File "/home/skhanuja/.cache/huggingface/modules/datasets_modules/datasets/xtreme/29f5d57a48779f37ccb75cb8708d1095448aad0713b425bdc1ff9a4a128a56e4/xtreme.py", line 921, in generate_examples for path, file in filepath: File "/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/site-packages/datasets/download/download_manager.py", line 158, in __iter__ yield from self.generator(*self.args, **self.kwargs) File "/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/site-packages/datasets/download/download_manager.py", line 211, in _iter_from_path yield from cls._iter_tar(f) File "/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/site-packages/datasets/download/download_manager.py", line 167, in _iter_tar for tarinfo in stream: File "/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/tarfile.py", line 2475, in __iter__ tarinfo = self.next() File "/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/tarfile.py", line 2344, in next raise ReadError("unexpected end of data") tarfile.ReadError: unexpected end of data The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/home/skhanuja/Optimal-Resource-Allocation-for-Multilingual-Finetuning/src/train_al.py", line 855, in <module> main() File "/home/skhanuja/Optimal-Resource-Allocation-for-Multilingual-Finetuning/src/train_al.py", line 487, in main train_dataset = load_dataset(dataset_name, source_language, split="train", cache_dir=args.cache_dir, download_mode="force_redownload") File "/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/site-packages/datasets/load.py", line 1782, in load_dataset builder_instance.download_and_prepare( File "/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/site-packages/datasets/builder.py", line 872, in download_and_prepare self._download_and_prepare( File "/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/site-packages/datasets/builder.py", line 1649, in _download_and_prepare super()._download_and_prepare( File "/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/site-packages/datasets/builder.py", line 967, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/site-packages/datasets/builder.py", line 1488, in _prepare_split for job_id, done, content in self._prepare_split_single( File "/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/site-packages/datasets/builder.py", line 1644, 
in _prepare_split_single raise DatasetGenerationError("An error occurred while generating the dataset") from e datasets.builder.DatasetGenerationError: An error occurred while generating the dataset ``` ### Steps to reproduce the bug ``` train_dataset = load_dataset('xtreme', 'udpos.English', split="train", cache_dir=args.cache_dir, download_mode="force_redownload") ``` ### Expected behavior Download the udpos dataset ### Environment info - `datasets` version: 2.10.1 - Platform: Linux-3.10.0-957.1.3.el7.x86_64-x86_64-with-glibc2.17 - Python version: 3.10.8 - PyArrow version: 10.0.1 - Pandas version: 1.5.2
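As a last resort, the checksum error quoted in the comments above suggests skipping verification. A hedged sketch, to be used only if you trust the upstream host:

```python
# Last-resort sketch taken from the error message itself: skip checksum
# checks. A truly corrupted tarball will still fail at extraction time,
# as in the traceback above.
from datasets import load_dataset

ds = load_dataset("xtreme", "udpos.English", split="train", verification_mode="no_checks")
```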
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5594/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5594/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5592
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5592/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5592/comments
https://api.github.com/repos/huggingface/datasets/issues/5592/events
https://github.com/huggingface/datasets/pull/5592
1,603,619,124
PR_kwDODunzps5K9dWr
5,592
Fix docstring example
{ "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "repos_url": "https://api.github.com/users/stevhliu/repos", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009526 / 0.011353 (-0.001827) | 0.005132 / 0.011008 (-0.005876) | 0.101312 / 0.038508 (0.062804) | 0.035703 / 0.023109 (0.012594) | 0.301788 / 0.275898 (0.025890) | 0.368411 / 0.323480 (0.044932) | 0.008163 / 0.007986 (0.000177) | 0.005462 / 0.004328 (0.001134) | 0.077282 / 0.004250 (0.073031) | 0.044139 / 0.037052 (0.007086) | 0.312280 / 0.258489 (0.053791) | 0.351870 / 0.293841 (0.058029) | 0.038266 / 0.128546 (-0.090281) | 0.012051 / 0.075646 (-0.063595) | 0.335109 / 0.419271 (-0.084163) | 0.047596 / 0.043533 (0.004064) | 0.300931 / 0.255139 (0.045792) | 0.325705 / 0.283200 (0.042505) | 0.100472 / 0.141683 (-0.041211) | 1.475037 / 1.452155 (0.022882) | 1.520059 / 1.492716 (0.027343) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.211096 / 0.018006 (0.193089) | 0.442988 / 0.000490 (0.442498) | 0.003644 / 0.000200 (0.003444) | 0.000090 / 0.000054 (0.000036) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027492 / 0.037411 (-0.009919) | 0.108981 / 0.014526 (0.094455) | 0.117836 / 0.176557 (-0.058720) | 0.161220 / 0.737135 (-0.575915) | 0.124765 / 0.296338 (-0.171574) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.413480 / 0.215209 (0.198271) | 4.111355 / 2.077655 (2.033700) | 1.933024 
/ 1.504120 (0.428904) | 1.727467 / 1.541195 (0.186272) | 1.827106 / 1.468490 (0.358616) | 0.688209 / 4.584777 (-3.896568) | 3.759672 / 3.745712 (0.013960) | 2.163806 / 5.269862 (-3.106056) | 1.473521 / 4.565676 (-3.092155) | 0.082859 / 0.424275 (-0.341416) | 0.012320 / 0.007607 (0.004713) | 0.515321 / 0.226044 (0.289277) | 5.158651 / 2.268929 (2.889722) | 2.489123 / 55.444624 (-52.955501) | 2.218910 / 6.876477 (-4.657566) | 2.257306 / 2.142072 (0.115233) | 0.861477 / 4.805227 (-3.943750) | 0.165857 / 6.500664 (-6.334807) | 0.063723 / 0.075469 (-0.011746) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.195163 / 1.841788 (-0.646625) | 14.954518 / 8.074308 (6.880210) | 14.272289 / 10.191392 (4.080897) | 0.167420 / 0.680424 (-0.513004) | 0.028907 / 0.534201 (-0.505294) | 0.450117 / 0.579283 (-0.129166) | 0.448532 / 0.434364 (0.014168) | 0.534406 / 0.540337 (-0.005931) | 0.633468 / 1.386936 (-0.753468) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007658 / 0.011353 (-0.003694) | 0.005266 / 0.011008 (-0.005742) | 0.075293 / 0.038508 (0.036785) | 0.034442 / 0.023109 (0.011333) | 0.346558 / 0.275898 (0.070660) | 0.391496 / 0.323480 (0.068017) | 0.005852 / 0.007986 (-0.002133) | 0.004121 / 0.004328 (-0.000207) | 0.074254 / 0.004250 (0.070004) | 0.048361 / 0.037052 (0.011309) | 0.344613 / 0.258489 (0.086124) | 0.401497 / 0.293841 (0.107656) | 0.037243 / 0.128546 (-0.091303) | 0.012505 / 0.075646 (-0.063142) | 0.087188 / 0.419271 (-0.332084) | 0.050114 / 0.043533 (0.006581) | 0.340454 / 0.255139 (0.085315) | 0.361087 / 0.283200 (0.077887) | 0.104692 / 0.141683 (-0.036991) | 1.419432 / 1.452155 (-0.032722) | 1.524709 / 1.492716 (0.031993) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.231820 / 0.018006 (0.213814) | 0.445791 / 0.000490 (0.445301) | 0.000442 / 0.000200 (0.000242) | 0.000061 / 0.000054 (0.000006) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030445 / 0.037411 (-0.006967) | 0.111183 / 0.014526 (0.096657) | 0.123494 / 0.176557 (-0.053063) | 0.173121 / 0.737135 (-0.564014) | 0.124968 / 0.296338 (-0.171371) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.428854 / 0.215209 (0.213645) | 4.270262 / 2.077655 (2.192608) | 2.012075 / 1.504120 (0.507955) | 1.826564 / 1.541195 (0.285370) | 1.931699 / 1.468490 (0.463209) | 0.728762 / 4.584777 (-3.856015) | 3.879640 / 3.745712 (0.133928) | 3.325715 / 5.269862 (-1.944147) | 1.818573 / 4.565676 (-2.747104) | 0.087879 / 0.424275 (-0.336396) | 0.012530 / 0.007607 (0.004923) | 0.530249 / 0.226044 (0.304204) | 5.286110 / 2.268929 (3.017181) | 2.566649 / 55.444624 (-52.877975) | 2.210162 / 6.876477 (-4.666315) | 2.297562 / 2.142072 (0.155490) | 0.906161 / 4.805227 (-3.899066) | 0.171914 / 6.500664 (-6.328750) | 0.064182 / 0.075469 (-0.011287) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.285781 / 1.841788 (-0.556006) | 16.159072 / 8.074308 (8.084763) | 14.087492 / 10.191392 (3.896100) | 0.148789 / 0.680424 (-0.531635) | 0.018078 / 0.534201 (-0.516123) | 0.427748 / 0.579283 (-0.151535) | 0.447079 / 0.434364 (0.012715) | 0.535917 / 0.540337 (-0.004421) | 0.627491 / 1.386936 (-0.759445) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#88fa043d08c12923709c0492e037130c99c029fb \"CML watermark\")\n" ]
1,677,609,757,000
1,677,612,393,000
1,677,611,955,000
MEMBER
null
Fixes #5581 to use the correct output for the `set_format` method.
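For reference, a minimal sketch of the pattern the corrected docstring illustrates; the column names are hypothetical, and the key point is that `set_format` applies in place and returns `None`:

```python
# Minimal sketch (column names hypothetical): set_format returns None and
# mutates the dataset, so inspect the dataset afterwards rather than the
# return value.
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b"], "label": [0, 1]})
ds.set_format(type="numpy", columns=["label"])
print(ds[0])  # e.g. {'label': 0} with the value as a NumPy scalar
```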
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5592/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5592/timeline
null
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5592", "html_url": "https://github.com/huggingface/datasets/pull/5592", "diff_url": "https://github.com/huggingface/datasets/pull/5592.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5592.patch", "merged_at": "2023-02-28T19:19:15" }
true
https://api.github.com/repos/huggingface/datasets/issues/5591
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5591/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5591/comments
https://api.github.com/repos/huggingface/datasets/issues/5591/events
https://github.com/huggingface/datasets/pull/5591
1,603,571,407
PR_kwDODunzps5K9S79
5,591
set dev version
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5591). All of your documentation changes will be reflected on that endpoint.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008826 / 0.011353 (-0.002527) | 0.004595 / 0.011008 (-0.006413) | 0.103387 / 0.038508 (0.064879) | 0.030241 / 0.023109 (0.007132) | 0.351202 / 0.275898 (0.075303) | 0.417601 / 0.323480 (0.094121) | 0.007121 / 0.007986 (-0.000865) | 0.003497 / 0.004328 (-0.000831) | 0.079256 / 0.004250 (0.075006) | 0.037617 / 0.037052 (0.000564) | 0.380542 / 0.258489 (0.122053) | 0.397863 / 0.293841 (0.104022) | 0.034291 / 0.128546 (-0.094255) | 0.011767 / 0.075646 (-0.063879) | 0.323737 / 0.419271 (-0.095534) | 0.041502 / 0.043533 (-0.002031) | 0.352982 / 0.255139 (0.097843) | 0.378618 / 0.283200 (0.095418) | 0.091671 / 0.141683 (-0.050012) | 1.499278 / 1.452155 (0.047123) | 1.517489 / 1.492716 (0.024773) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.190108 / 0.018006 (0.172102) | 0.414404 / 0.000490 (0.413915) | 0.001064 / 0.000200 (0.000864) | 0.000066 / 0.000054 (0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023214 / 0.037411 (-0.014198) | 0.099351 / 0.014526 (0.084825) | 0.105227 / 0.176557 (-0.071330) | 0.150620 / 0.737135 (-0.586516) | 0.109323 / 0.296338 (-0.187015) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / 
old (diff) | 0.412463 / 0.215209 (0.197254) | 4.138123 / 2.077655 (2.060469) | 1.845163 / 1.504120 (0.341043) | 1.641108 / 1.541195 (0.099913) | 1.715471 / 1.468490 (0.246981) | 0.697397 / 4.584777 (-3.887380) | 3.449829 / 3.745712 (-0.295883) | 1.959309 / 5.269862 (-3.310553) | 1.285754 / 4.565676 (-3.279923) | 0.082746 / 0.424275 (-0.341529) | 0.012523 / 0.007607 (0.004916) | 0.524745 / 0.226044 (0.298700) | 5.257085 / 2.268929 (2.988156) | 2.293163 / 55.444624 (-53.151461) | 1.958309 / 6.876477 (-4.918168) | 2.016106 / 2.142072 (-0.125966) | 0.814359 / 4.805227 (-3.990869) | 0.149443 / 6.500664 (-6.351221) | 0.066013 / 0.075469 (-0.009456) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.248495 / 1.841788 (-0.593292) | 14.303301 / 8.074308 (6.228993) | 14.238533 / 10.191392 (4.047141) | 0.161421 / 0.680424 (-0.519003) | 0.028779 / 0.534201 (-0.505422) | 0.396511 / 0.579283 (-0.182772) | 0.412784 / 0.434364 (-0.021580) | 0.473984 / 0.540337 (-0.066353) | 0.569610 / 1.386936 (-0.817327) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007003 / 0.011353 (-0.004350) | 0.004621 / 0.011008 (-0.006387) | 0.079418 / 0.038508 (0.040910) | 0.028659 / 0.023109 (0.005550) | 0.340594 / 0.275898 (0.064696) | 0.377972 / 0.323480 (0.054492) | 0.005421 / 0.007986 (-0.002565) | 0.004852 / 0.004328 (0.000523) | 0.077579 / 0.004250 (0.073329) | 0.042662 / 0.037052 (0.005610) | 0.342264 / 0.258489 (0.083775) | 0.387255 / 0.293841 (0.093414) | 0.032574 / 0.128546 (-0.095972) | 0.011820 / 0.075646 (-0.063826) | 0.087960 / 0.419271 (-0.331312) | 0.045199 / 0.043533 (0.001667) | 0.341785 / 0.255139 (0.086646) | 0.365014 / 0.283200 (0.081814) | 0.096129 / 0.141683 (-0.045554) | 1.498962 / 1.452155 (0.046807) | 1.557331 / 1.492716 (0.064615) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.236216 / 0.018006 (0.218210) | 0.440189 / 0.000490 (0.439699) | 0.000399 / 0.000200 
(0.000199) | 0.000060 / 0.000054 (0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026357 / 0.037411 (-0.011055) | 0.104485 / 0.014526 (0.089959) | 0.109616 / 0.176557 (-0.066941) | 0.163005 / 0.737135 (-0.574130) | 0.113859 / 0.296338 (-0.182479) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.437452 / 0.215209 (0.222243) | 4.371854 / 2.077655 (2.294199) | 2.056845 / 1.504120 (0.552725) | 1.856071 / 1.541195 (0.314876) | 1.957978 / 1.468490 (0.489488) | 0.703171 / 4.584777 (-3.881606) | 3.433889 / 3.745712 (-0.311823) | 1.968321 / 5.269862 (-3.301541) | 1.204947 / 4.565676 (-3.360729) | 0.084499 / 0.424275 (-0.339777) | 0.012729 / 0.007607 (0.005122) | 0.537534 / 0.226044 (0.311490) | 5.383346 / 2.268929 (3.114417) | 2.522136 / 55.444624 (-52.922488) | 2.192715 / 6.876477 (-4.683762) | 2.243579 / 2.142072 (0.101507) | 0.811136 / 4.805227 (-3.994091) | 0.154015 / 6.500664 (-6.346649) | 0.069324 / 0.075469 (-0.006145) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.294232 / 1.841788 (-0.547556) | 14.809448 / 8.074308 (6.735140) | 13.510074 / 10.191392 (3.318682) | 0.158033 / 0.680424 (-0.522391) | 0.016703 / 0.534201 (-0.517498) | 0.393976 / 0.579283 (-0.185307) | 0.385983 / 0.434364 (-0.048381) | 0.476691 / 0.540337 (-0.063646) | 0.565694 / 1.386936 (-0.821242) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#b0dd3126196e8fcd9ba81a6602b46623b4e77e6e \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after 
write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009155 / 0.011353 (-0.002198) | 0.005227 / 0.011008 (-0.005781) | 0.099767 / 0.038508 (0.061259) | 0.035338 / 0.023109 (0.012229) | 0.293913 / 0.275898 (0.018015) | 0.366976 / 0.323480 (0.043496) | 0.007802 / 0.007986 (-0.000184) | 0.005286 / 0.004328 (0.000958) | 0.075117 / 0.004250 (0.070867) | 0.042336 / 0.037052 (0.005284) | 0.304690 / 0.258489 (0.046201) | 0.343496 / 0.293841 (0.049655) | 0.038745 / 0.128546 (-0.089802) | 0.012275 / 0.075646 (-0.063371) | 0.334455 / 0.419271 (-0.084817) | 0.052611 / 0.043533 (0.009078) | 0.293229 / 0.255139 (0.038090) | 0.314340 / 0.283200 (0.031140) | 0.108676 / 0.141683 (-0.033007) | 1.444495 / 1.452155 (-0.007659) | 1.492244 / 1.492716 (-0.000472) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.204852 / 0.018006 (0.186846) | 0.438202 / 0.000490 (0.437712) | 0.005043 / 0.000200 (0.004843) | 0.000282 / 0.000054 (0.000228) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027268 / 0.037411 (-0.010143) | 0.109497 / 0.014526 (0.094972) | 0.117187 / 0.176557 (-0.059369) | 0.162551 / 0.737135 (-0.574584) | 0.124175 / 0.296338 (-0.172164) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.401667 / 0.215209 (0.186458) | 4.010274 / 2.077655 (1.932619) | 1.882617 / 1.504120 (0.378497) | 1.721960 / 1.541195 (0.180765) | 1.806874 / 1.468490 (0.338384) | 0.711253 / 4.584777 (-3.873524) | 3.806585 / 3.745712 (0.060873) | 3.713011 / 5.269862 (-1.556851) | 1.896558 / 4.565676 (-2.669119) | 0.086092 / 0.424275 (-0.338184) | 0.012129 / 0.007607 (0.004522) | 0.504905 / 0.226044 (0.278861) | 5.050794 / 2.268929 (2.781865) | 2.324331 / 55.444624 (-53.120293) | 2.020170 / 6.876477 (-4.856307) | 2.079685 / 2.142072 (-0.062388) | 0.854782 / 4.805227 (-3.950445) | 0.166754 / 6.500664 (-6.333910) | 0.062434 / 0.075469 (-0.013035) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.187897 / 1.841788 (-0.653891) | 14.618517 / 8.074308 (6.544209) | 13.205760 / 10.191392 (3.014368) | 0.154322 / 0.680424 (-0.526102) | 0.029243 / 0.534201 (-0.504958) | 0.442390 / 0.579283 (-0.136893) | 0.434651 / 
0.434364 (0.000287) | 0.523082 / 0.540337 (-0.017256) | 0.602675 / 1.386936 (-0.784261) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007214 / 0.011353 (-0.004139) | 0.005225 / 0.011008 (-0.005783) | 0.076497 / 0.038508 (0.037989) | 0.032761 / 0.023109 (0.009652) | 0.336005 / 0.275898 (0.060107) | 0.373547 / 0.323480 (0.050067) | 0.005460 / 0.007986 (-0.002526) | 0.003933 / 0.004328 (-0.000395) | 0.074540 / 0.004250 (0.070289) | 0.047785 / 0.037052 (0.010733) | 0.341917 / 0.258489 (0.083428) | 0.396978 / 0.293841 (0.103137) | 0.036763 / 0.128546 (-0.091783) | 0.012043 / 0.075646 (-0.063603) | 0.087632 / 0.419271 (-0.331640) | 0.049376 / 0.043533 (0.005843) | 0.335169 / 0.255139 (0.080030) | 0.354852 / 0.283200 (0.071652) | 0.100180 / 0.141683 (-0.041503) | 1.443422 / 1.452155 (-0.008733) | 1.518618 / 1.492716 (0.025901) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.209593 / 0.018006 (0.191587) | 0.444028 / 0.000490 (0.443538) | 0.004545 / 0.000200 (0.004345) | 0.000100 / 0.000054 (0.000046) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029676 / 0.037411 (-0.007735) | 0.115444 / 0.014526 (0.100918) | 0.121765 / 0.176557 (-0.054791) | 0.171037 / 0.737135 (-0.566098) | 0.128592 / 0.296338 (-0.167746) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.428556 / 0.215209 (0.213347) | 4.228531 / 2.077655 (2.150877) | 2.039190 / 1.504120 (0.535070) | 
1.836518 / 1.541195 (0.295324) | 1.897040 / 1.468490 (0.428550) | 0.698893 / 4.584777 (-3.885884) | 3.753998 / 3.745712 (0.008286) | 2.097731 / 5.269862 (-3.172131) | 1.338315 / 4.565676 (-3.227361) | 0.087119 / 0.424275 (-0.337156) | 0.012149 / 0.007607 (0.004542) | 0.520774 / 0.226044 (0.294730) | 5.227420 / 2.268929 (2.958492) | 2.522235 / 55.444624 (-52.922389) | 2.194213 / 6.876477 (-4.682264) | 2.241406 / 2.142072 (0.099333) | 0.843119 / 4.805227 (-3.962109) | 0.169128 / 6.500664 (-6.331536) | 0.065071 / 0.075469 (-0.010398) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.254490 / 1.841788 (-0.587298) | 15.037137 / 8.074308 (6.962829) | 13.115333 / 10.191392 (2.923941) | 0.181743 / 0.680424 (-0.498681) | 0.017748 / 0.534201 (-0.516453) | 0.425758 / 0.579283 (-0.153525) | 0.429926 / 0.434364 (-0.004438) | 0.524386 / 0.540337 (-0.015951) | 0.643044 / 1.386936 (-0.743892) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#09e820e79a3b879855b514e2a62d84b738013940 \"CML watermark\")\n" ]
1,677,607,745,000
1,677,608,191,000
1,677,607,755,000
MEMBER
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5591/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5591/timeline
null
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5591", "html_url": "https://github.com/huggingface/datasets/pull/5591", "diff_url": "https://github.com/huggingface/datasets/pull/5591.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5591.patch", "merged_at": "2023-02-28T18:09:15" }
true