url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | body | reactions | timeline_url | performed_via_github_app | state_reason | draft | pull_request | is_pull_request |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/6008 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6008/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6008/comments | https://api.github.com/repos/huggingface/datasets/issues/6008/events | https://github.com/huggingface/datasets/issues/6008 | 1,789,869,344 | I_kwDODunzps5qrz0g | 6,008 | Dataset.from_generator consistently freezes at ~1000 rows | {
"login": "andreemic",
"id": 27695722,
"node_id": "MDQ6VXNlcjI3Njk1NzIy",
"avatar_url": "https://avatars.githubusercontent.com/u/27695722?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/andreemic",
"html_url": "https://github.com/andreemic",
"followers_url": "https://api.github.com/users/andreemic/followers",
"following_url": "https://api.github.com/users/andreemic/following{/other_user}",
"gists_url": "https://api.github.com/users/andreemic/gists{/gist_id}",
"starred_url": "https://api.github.com/users/andreemic/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andreemic/subscriptions",
"organizations_url": "https://api.github.com/users/andreemic/orgs",
"repos_url": "https://api.github.com/users/andreemic/repos",
"events_url": "https://api.github.com/users/andreemic/events{/privacy}",
"received_events_url": "https://api.github.com/users/andreemic/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"By default, we write data to disk (so it can be memory-mapped) every 1000 rows/samples. You can control this with the `writer_batch_size` parameter. Also, when working with fixed-size arrays, the `ArrayXD` feature types yield better performance (e.g., in your case, `features=datasets.Features({\"i\": datasets.Array3D(shape=(512,512,3), dtype=\"float32\")})` should be faster).\r\n\r\nOur support for multi-dim arrays could be better, and we plan to improve it as part of https://github.com/huggingface/datasets/issues/5272.",
"> By default, we write data to disk (so it can be memory-mapped) every 1000 rows/samples. You can control this with the `writer_batch_size` parameter. Also, when working with fixed-size arrays, the `ArrayXD` feature types yield better performance (e.g., in your case, `features=datasets.Features({\"i\": datasets.Array3D(shape=(512,512,3), dtype=\"float32\")})` should be faster).\r\n> \r\n> Our support for multi-dim arrays could be better, and we plan to improve it as part of #5272.\r\n\r\nThanks for the explanation! The Image array was just for demonstration, I use PIL Images in practice. Does that make a difference? What's the best approach for a dataset with PIL Images as rows?"
] | 1,688,573,208,000 | 1,688,593,474,000 | null | NONE | null | ### Describe the bug
Whenever I try to create a dataset that contains images using `Dataset.from_generator`, it freezes at around 996 rows. I suppose it has something to do with memory consumption, but there is still memory available.
It has somehow worked a few times, but mostly it freezes, which makes the datasets library much more cumbersome to work with, because generators are the easiest way to turn an existing dataset into a Hugging Face dataset.
I've let it run in the frozen state for far longer than loading the actual dataset could possibly take.
Let me know if you have any ideas on how to resolve it!
### Steps to reproduce the bug
```python
from datasets import Dataset
import numpy as np
def gen():
for row in range(10000):
yield {"i": np.random.rand(512, 512, 3)}
Dataset.from_generator(gen)
# -> 90% of the time gets stuck around 1000 rows
```
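Drawing on the maintainer's reply above, here is a minimal sketch of the suggested mitigation: pass an explicit `Array3D` feature and a smaller `writer_batch_size`. The value 100 is an illustrative assumption, not a benchmarked recommendation.
```python
from datasets import Dataset, Features, Array3D
import numpy as np

def gen():
    for _ in range(10000):
        yield {"i": np.random.rand(512, 512, 3)}

# An explicit fixed-shape feature type avoids slow per-batch type inference;
# writer_batch_size controls how many rows are buffered before each flush to
# disk (the default is 1000, which is where the apparent freeze occurs).
ds = Dataset.from_generator(
    gen,
    features=Features({"i": Array3D(shape=(512, 512, 3), dtype="float32")}),
    writer_batch_size=100,  # assumption: smaller batches make progress visible sooner
)
```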
### Expected behavior
It should continue through all the examples yielded by the generator, or at least throw an error or otherwise communicate what is going on.
### Environment info
- `datasets` version: 2.8.0
- Platform: Linux-5.15.0-52-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 12.0.1
- Pandas version: 1.5.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6008/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6008/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6007 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6007/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6007/comments | https://api.github.com/repos/huggingface/datasets/issues/6007/events | https://github.com/huggingface/datasets/issues/6007 | 1,789,782,693 | I_kwDODunzps5qreql | 6,007 | Get an error "OverflowError: Python int too large to convert to C long" when loading a large dataset | {
"login": "silverriver",
"id": 2529049,
"node_id": "MDQ6VXNlcjI1MjkwNDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/2529049?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/silverriver",
"html_url": "https://github.com/silverriver",
"followers_url": "https://api.github.com/users/silverriver/followers",
"following_url": "https://api.github.com/users/silverriver/following{/other_user}",
"gists_url": "https://api.github.com/users/silverriver/gists{/gist_id}",
"starred_url": "https://api.github.com/users/silverriver/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/silverriver/subscriptions",
"organizations_url": "https://api.github.com/users/silverriver/orgs",
"repos_url": "https://api.github.com/users/silverriver/repos",
"events_url": "https://api.github.com/users/silverriver/events{/privacy}",
"received_events_url": "https://api.github.com/users/silverriver/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"This error means that one of the int32 (`Value(\"int32\")`) columns in the dataset has a value that is out of the valid (int32) range.\r\n\r\nI'll open a PR to print the name of a problematic column to make debugging such errors easier."
] | 1,688,570,210,000 | 1,688,584,517,000 | null | CONTRIBUTOR | null | ### Describe the bug
When loading a large dataset with the following code:
```python
from datasets import load_dataset
dataset = load_dataset("liwu/MNBVC", 'news_peoples_daily', split='train')
```
We encountered the error: "OverflowError: Python int too large to convert to C long"
The error looks something like this:
```
OverflowError: Python int too large to convert to C long
During handling of the above exception, another exception occurred:
OverflowError Traceback (most recent call last)
<ipython-input-7-0ed8700e662d> in <module>
----> 1 dataset = load_dataset("liwu/MNBVC", 'news_peoples_daily', split='train', cache_dir='/sfs/MNBVC/.cache/')
/sfs/MNBVC/venv/lib64/python3.6/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)
1749 ignore_verifications=ignore_verifications,
1750 try_from_hf_gcs=try_from_hf_gcs,
-> 1751 use_auth_token=use_auth_token,
1752 )
1753
/sfs/MNBVC/venv/lib64/python3.6/site-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)
703 if not downloaded_from_gcs:
704 self._download_and_prepare(
--> 705 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
706 )
707 # Sync info
/sfs/MNBVC/venv/lib64/python3.6/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos)
1225
1226 def _download_and_prepare(self, dl_manager, verify_infos):
-> 1227 super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)
1228
1229 def _get_examples_iterable_for_split(self, split_generator: SplitGenerator) -> ExamplesIterable:
/sfs/MNBVC/venv/lib64/python3.6/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
791 try:
792 # Prepare split will record examples associated to the split
--> 793 self._prepare_split(split_generator, **prepare_split_kwargs)
794 except OSError as e:
795 raise OSError(
/sfs/MNBVC/venv/lib64/python3.6/site-packages/datasets/builder.py in _prepare_split(self, split_generator, check_duplicate_keys)
1219 writer.write(example, key)
1220 finally:
-> 1221 num_examples, num_bytes = writer.finalize()
1222
1223 split_generator.split_info.num_examples = num_examples
/sfs/MNBVC/venv/lib64/python3.6/site-packages/datasets/arrow_writer.py in finalize(self, close_stream)
536 # Re-intializing to empty list for next batch
537 self.hkey_record = []
--> 538 self.write_examples_on_file()
539 if self.pa_writer is None:
540 if self.schema:
/sfs/MNBVC/venv/lib64/python3.6/site-packages/datasets/arrow_writer.py in write_examples_on_file(self)
407 # Since current_examples contains (example, key) tuples
408 batch_examples[col] = [row[0][col] for row in self.current_examples]
--> 409 self.write_batch(batch_examples=batch_examples)
410 self.current_examples = []
411
/sfs/MNBVC/venv/lib64/python3.6/site-packages/datasets/arrow_writer.py in write_batch(self, batch_examples, writer_batch_size)
506 col_try_type = try_features[col] if try_features is not None and col in try_features else None
507 typed_sequence = OptimizedTypedSequence(batch_examples[col], type=col_type, try_type=col_try_type, col=col)
--> 508 arrays.append(pa.array(typed_sequence))
509 inferred_features[col] = typed_sequence.get_inferred_type()
510 schema = inferred_features.arrow_schema if self.pa_writer is None else self.schema
/sfs/MNBVC/venv/lib64/python3.6/site-packages/pyarrow/array.pxi in pyarrow.lib.array()
/sfs/MNBVC/venv/lib64/python3.6/site-packages/pyarrow/array.pxi in pyarrow.lib._handle_arrow_array_protocol()
/sfs/MNBVC/venv/lib64/python3.6/site-packages/datasets/arrow_writer.py in __arrow_array__(self, type)
180 else:
181 trying_cast_to_python_objects = True
--> 182 out = pa.array(cast_to_python_objects(data, only_1d_for_numpy=True))
183 # use smaller integer precisions if possible
184 if self.trying_int_optimization:
/sfs/MNBVC/venv/lib64/python3.6/site-packages/pyarrow/array.pxi in pyarrow.lib.array()
/sfs/MNBVC/venv/lib64/python3.6/site-packages/pyarrow/array.pxi in pyarrow.lib._sequence_to_array()
/sfs/MNBVC/venv/lib64/python3.6/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status()
OverflowError: Python int too large to convert to C long
```
However, that dataset can be loaded in a streaming manner:
```python
from datasets import load_dataset
dataset = load_dataset("liwu/MNBVC", 'news_peoples_daily', split='train', streaming=True)
for i in dataset:
    pass  # it works well
```
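Following the maintainer's explanation in the comment above that an `int32` column holds an out-of-range value, here is a hedged sketch for locating the offending column via streaming; it only checks top-level integer fields, which is an assumption about the schema.
```python
from datasets import load_dataset

INT32_MIN, INT32_MAX = -(2**31), 2**31 - 1

stream = load_dataset("liwu/MNBVC", "news_peoples_daily", split="train", streaming=True)
for n, row in enumerate(stream):
    for col, value in row.items():
        # Only top-level int fields are checked; nested structures are skipped.
        if isinstance(value, int) and not (INT32_MIN <= value <= INT32_MAX):
            print(f"row {n}: column {col!r} = {value} overflows int32")
```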
Another issue is reported in our dataset hub:
https://huggingface.co/datasets/liwu/MNBVC/discussions/2
### Steps to reproduce the bug
```python
from datasets import load_dataset

dataset = load_dataset("liwu/MNBVC", 'news_peoples_daily', split='train')
```
### Expected behavior
The dataset can be loaded safely.
### Environment info
- `datasets` version: 2.4.0
- Platform: Linux-3.10.0-1160.an7.x86_64-x86_64-with-centos-7.9
- Python version: 3.6.8
- PyArrow version: 6.0.1
- Pandas version: 1.1.5 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6007/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6007/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6006 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6006/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6006/comments | https://api.github.com/repos/huggingface/datasets/issues/6006/events | https://github.com/huggingface/datasets/issues/6006 | 1,788,855,582 | I_kwDODunzps5qn8Ue | 6,006 | NotADirectoryError when loading gigawords | {
"login": "xipq",
"id": 115634163,
"node_id": "U_kgDOBuRv8w",
"avatar_url": "https://avatars.githubusercontent.com/u/115634163?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xipq",
"html_url": "https://github.com/xipq",
"followers_url": "https://api.github.com/users/xipq/followers",
"following_url": "https://api.github.com/users/xipq/following{/other_user}",
"gists_url": "https://api.github.com/users/xipq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xipq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xipq/subscriptions",
"organizations_url": "https://api.github.com/users/xipq/orgs",
"repos_url": "https://api.github.com/users/xipq/repos",
"events_url": "https://api.github.com/users/xipq/events{/privacy}",
"received_events_url": "https://api.github.com/users/xipq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"issue due to corrupted download files. resolved after cleaning download cache. sorry for any inconvinence."
] | 1,688,538,221,000 | 1,688,538,662,000 | 1,688,538,661,000 | NONE | null | ### Describe the bug
Got a `NotADirectoryError` when loading the gigaword dataset.
### Steps to reproduce the bug
When running
```python
import datasets
datasets.load_dataset('gigaword')
```
Got the following exception:
```bash
Traceback (most recent call last):
File "/home/x/.conda/envs/dataproc/lib/python3.8/site-packages/datasets/builder.py", line 1629, in _prepare_split_single
for key, record in generator:
File "/home/x/.cache/huggingface/modules/datasets_modules/datasets/gigaword/ea83a8b819190acac5f2dae011fad51dccf269a0604ec5dd24795b
64efb424b6/gigaword.py", line 115, in _generate_examples
with open(src_path, encoding="utf-8") as f_d, open(tgt_path, encoding="utf-8") as f_s:
File "/home/x/.conda/envs/dataproc/lib/python3.8/site-packages/datasets/streaming.py", line 71, in wrapper
return function(*args, use_auth_token=use_auth_token, **kwargs)
File "/home/x/.conda/envs/dataproc/lib/python3.8/site-packages/datasets/download/streaming_download_manager.py", line 493, in xope
n
return open(main_hop, mode, *args, **kwargs)
NotADirectoryError: [Errno 20] Not a directory: '/home/x/.cache/huggingface/datasets/downloads/6da52431bb5124d90cf51a0187d2dbee9046e89780c4be7599794a4f559048ec/org_data/train.src.txt'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "gigaword.py", line 38, in <module>
main()
File "gigaword.py", line 35, in main
train, dev, test = dataset.generate_k_shot_data(k=32, seed=seed, path="../data/")
File "/home/x/MICL/preprocess/fewshot_gym_dataset.py", line 199, in generate_k_shot_data
dataset = self.load_dataset()
File "gigaword.py", line 29, in load_dataset
return datasets.load_dataset('gigaword')
File "/home/x/.conda/envs/dataproc/lib/python3.8/site-packages/datasets/load.py", line 1809, in load_dataset
builder_instance.download_and_prepare(
File "/home/x/.conda/envs/dataproc/lib/python3.8/site-packages/datasets/builder.py", line 909, in download_and_prepare
self._download_and_prepare(
File "/home/x/.conda/envs/dataproc/lib/python3.8/site-packages/datasets/builder.py", line 1670, in _download_and_prepare
super()._download_and_prepare(
File "/home/x/.conda/envs/dataproc/lib/python3.8/site-packages/datasets/builder.py", line 1004, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/x/.conda/envs/dataproc/lib/python3.8/site-packages/datasets/builder.py", line 1508, in _prepare_split
for job_id, done, content in self._prepare_split_single(
File "/home/x/.conda/envs/dataproc/lib/python3.8/site-packages/datasets/builder.py", line 1665, in _prepare_split_single
raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.builder.DatasetGenerationError: An error occurred while generating the dataset
```
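Given the author's closing comment that corrupted download files were the cause, a minimal sketch of the fix: force a fresh download instead of reusing the cache. `download_mode="force_redownload"` is the standard `load_dataset` option for ignoring cached downloads.
```python
import datasets

# Re-fetch the source archives rather than reusing the (corrupted) cached copies.
dataset = datasets.load_dataset("gigaword", download_mode="force_redownload")
```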
### Expected behavior
The dataset should download and process successfully.
### Environment info
- `datasets` version: 2.13.1
- Platform: Linux-5.0.0-1032-azure-x86_64-with-glibc2.10
- Python version: 3.8.0
- Huggingface_hub version: 0.15.1
- PyArrow version: 12.0.1
- Pandas version: 2.0.3
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6006/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6006/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6005 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6005/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6005/comments | https://api.github.com/repos/huggingface/datasets/issues/6005/events | https://github.com/huggingface/datasets/pull/6005 | 1,788,103,576 | PR_kwDODunzps5UoJ91 | 6,005 | Drop Python 3.7 support | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006152 / 0.011353 (-0.005200) | 0.003916 / 0.011008 (-0.007092) | 0.097355 / 0.038508 (0.058847) | 0.037228 / 0.023109 (0.014119) | 0.315753 / 0.275898 (0.039855) | 0.387949 / 0.323480 (0.064470) | 0.004804 / 0.007986 (-0.003181) | 0.002975 / 0.004328 (-0.001353) | 0.076932 / 0.004250 (0.072682) | 0.053497 / 0.037052 (0.016445) | 0.331143 / 0.258489 (0.072654) | 0.388347 / 0.293841 (0.094506) | 0.027535 / 0.128546 (-0.101011) | 0.008509 / 0.075646 (-0.067137) | 0.312639 / 0.419271 (-0.106632) | 0.047212 / 0.043533 (0.003679) | 0.316875 / 0.255139 (0.061736) | 0.352191 / 0.283200 (0.068992) | 0.021380 / 0.141683 (-0.120303) | 1.541401 / 1.452155 (0.089247) | 1.519420 / 1.492716 (0.026704) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.206332 / 0.018006 (0.188326) | 0.412252 / 0.000490 (0.411762) | 0.005119 / 0.000200 (0.004919) | 0.000077 / 0.000054 (0.000022) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023856 / 0.037411 (-0.013556) | 0.098216 / 0.014526 (0.083691) | 0.106553 / 0.176557 (-0.070003) | 0.168767 / 0.737135 (-0.568369) | 0.109244 / 0.296338 (-0.187094) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.457580 / 0.215209 (0.242371) | 4.583246 / 2.077655 (2.505591) | 2.296356 / 1.504120 (0.792236) | 2.096216 / 1.541195 (0.555021) | 2.159086 / 1.468490 
(0.690596) | 0.557905 / 4.584777 (-4.026872) | 3.345910 / 3.745712 (-0.399802) | 1.767436 / 5.269862 (-3.502426) | 1.021583 / 4.565676 (-3.544094) | 0.067265 / 0.424275 (-0.357011) | 0.011411 / 0.007607 (0.003804) | 0.559841 / 0.226044 (0.333797) | 5.586892 / 2.268929 (3.317963) | 2.735520 / 55.444624 (-52.709104) | 2.429393 / 6.876477 (-4.447084) | 2.544901 / 2.142072 (0.402829) | 0.667603 / 4.805227 (-4.137625) | 0.136244 / 6.500664 (-6.364421) | 0.066961 / 0.075469 (-0.008508) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.206529 / 1.841788 (-0.635259) | 13.988306 / 8.074308 (5.913998) | 13.481813 / 10.191392 (3.290421) | 0.161901 / 0.680424 (-0.518523) | 0.016850 / 0.534201 (-0.517351) | 0.367657 / 0.579283 (-0.211626) | 0.393343 / 0.434364 (-0.041021) | 0.465288 / 0.540337 (-0.075050) | 0.559888 / 1.386936 (-0.827048) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005956 / 0.011353 (-0.005397) | 0.003734 / 0.011008 (-0.007274) | 0.077841 / 0.038508 (0.039333) | 0.036532 / 0.023109 (0.013422) | 0.438923 / 0.275898 (0.163025) | 0.490133 / 0.323480 (0.166653) | 0.004651 / 0.007986 (-0.003335) | 0.002881 / 0.004328 (-0.001448) | 0.077868 / 0.004250 (0.073618) | 0.051700 / 0.037052 (0.014647) | 0.448018 / 0.258489 (0.189529) | 0.500304 / 0.293841 (0.206464) | 0.029051 / 0.128546 (-0.099496) | 0.008498 / 0.075646 (-0.067148) | 0.082932 / 0.419271 (-0.336339) | 0.043665 / 0.043533 (0.000132) | 0.431613 / 0.255139 (0.176474) | 0.458749 / 0.283200 (0.175549) | 0.021951 / 0.141683 (-0.119731) | 1.556043 / 1.452155 (0.103888) | 1.588391 / 1.492716 (0.095675) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.220674 / 0.018006 (0.202667) | 0.415408 / 0.000490 (0.414918) | 0.002613 / 0.000200 (0.002413) | 0.000075 / 0.000054 (0.000020) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025548 / 0.037411 (-0.011863) | 0.103633 / 0.014526 (0.089107) | 0.115193 / 0.176557 (-0.061364) | 0.163971 / 0.737135 (-0.573164) | 0.114754 / 0.296338 (-0.181585) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.456823 / 0.215209 (0.241614) | 4.569950 / 2.077655 (2.492296) | 2.196339 / 1.504120 (0.692219) | 1.985822 / 1.541195 (0.444628) | 2.044083 / 1.468490 (0.575593) | 0.567919 / 4.584777 (-4.016858) | 3.397515 / 3.745712 (-0.348197) | 1.741087 / 5.269862 (-3.528775) | 1.041237 / 4.565676 (-3.524440) | 0.068963 / 0.424275 (-0.355313) | 0.011677 / 0.007607 (0.004070) | 0.565010 / 0.226044 (0.338966) | 5.625886 / 2.268929 (3.356957) | 2.670658 / 55.444624 (-52.773967) | 2.300279 / 6.876477 (-4.576198) | 2.392178 / 2.142072 (0.250106) | 0.680226 / 4.805227 (-4.125001) | 0.139119 / 6.500664 (-6.361545) | 0.067953 / 0.075469 (-0.007516) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.303280 / 1.841788 (-0.538507) | 14.458686 / 8.074308 (6.384378) | 14.409369 / 10.191392 (4.217977) | 0.144581 / 0.680424 (-0.535843) | 0.016634 / 0.534201 (-0.517567) | 0.364607 / 0.579283 (-0.214676) | 0.394521 / 0.434364 (-0.039843) | 0.433417 / 0.540337 (-0.106921) | 0.527127 / 1.386936 (-0.859809) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#04a36f9546484dceadb84a133c1a460281d018f8 \"CML watermark\")\n",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6005). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006245 / 0.011353 (-0.005108) | 0.003871 / 0.011008 (-0.007138) | 0.098823 / 0.038508 (0.060315) | 0.039853 / 0.023109 (0.016744) | 0.314989 / 0.275898 (0.039091) | 0.376733 / 0.323480 (0.053254) | 0.004754 / 0.007986 (-0.003232) | 0.002971 / 0.004328 (-0.001357) | 0.078451 / 0.004250 (0.074201) | 0.053160 / 0.037052 (0.016107) | 0.324443 / 0.258489 (0.065954) | 0.361488 / 0.293841 (0.067647) | 0.027942 / 0.128546 (-0.100604) | 0.008535 / 0.075646 (-0.067111) | 0.315526 / 0.419271 (-0.103745) | 0.045706 / 0.043533 (0.002174) | 0.329614 / 0.255139 (0.074475) | 0.336339 / 0.283200 (0.053139) | 0.021278 / 0.141683 (-0.120405) | 1.529710 / 1.452155 (0.077555) | 1.566833 / 1.492716 (0.074116) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.215263 / 0.018006 (0.197257) | 0.440320 / 0.000490 (0.439830) | 0.002627 / 0.000200 (0.002427) | 0.000075 / 0.000054 (0.000021) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023971 / 0.037411 (-0.013441) | 0.100549 / 0.014526 (0.086023) | 0.106995 / 0.176557 (-0.069561) | 0.169630 / 0.737135 (-0.567505) | 0.111614 / 0.296338 (-0.184724) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.424911 / 0.215209 (0.209702) | 4.246920 / 2.077655 (2.169266) | 1.923321 / 1.504120 (0.419202) | 1.714795 / 1.541195 (0.173600) | 1.772906 / 1.468490 
(0.304416) | 0.554676 / 4.584777 (-4.030101) | 3.478896 / 3.745712 (-0.266816) | 2.800494 / 5.269862 (-2.469368) | 1.382630 / 4.565676 (-3.183047) | 0.067271 / 0.424275 (-0.357004) | 0.010967 / 0.007607 (0.003360) | 0.526769 / 0.226044 (0.300725) | 5.288564 / 2.268929 (3.019636) | 2.337459 / 55.444624 (-53.107165) | 1.999975 / 6.876477 (-4.876502) | 2.102680 / 2.142072 (-0.039392) | 0.672181 / 4.805227 (-4.133046) | 0.135097 / 6.500664 (-6.365567) | 0.066950 / 0.075469 (-0.008519) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.264365 / 1.841788 (-0.577423) | 14.282440 / 8.074308 (6.208132) | 14.220200 / 10.191392 (4.028808) | 0.139055 / 0.680424 (-0.541369) | 0.016681 / 0.534201 (-0.517520) | 0.367936 / 0.579283 (-0.211348) | 0.393959 / 0.434364 (-0.040404) | 0.424438 / 0.540337 (-0.115900) | 0.508065 / 1.386936 (-0.878872) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006514 / 0.011353 (-0.004839) | 0.003890 / 0.011008 (-0.007118) | 0.078871 / 0.038508 (0.040363) | 0.038080 / 0.023109 (0.014971) | 0.358282 / 0.275898 (0.082384) | 0.430654 / 0.323480 (0.107174) | 0.005712 / 0.007986 (-0.002273) | 0.003030 / 0.004328 (-0.001299) | 0.078636 / 0.004250 (0.074386) | 0.057771 / 0.037052 (0.020719) | 0.368814 / 0.258489 (0.110325) | 0.437047 / 0.293841 (0.143206) | 0.029470 / 0.128546 (-0.099076) | 0.008523 / 0.075646 (-0.067124) | 0.083334 / 0.419271 (-0.335938) | 0.044505 / 0.043533 (0.000972) | 0.357484 / 0.255139 (0.102345) | 0.393839 / 0.283200 (0.110639) | 0.023340 / 0.141683 (-0.118343) | 1.561033 / 1.452155 (0.108878) | 1.595560 / 1.492716 (0.102844) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.204149 / 0.018006 (0.186143) | 0.442747 / 0.000490 (0.442257) | 0.003105 / 0.000200 (0.002905) | 0.000085 / 0.000054 (0.000030) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027002 / 0.037411 (-0.010409) | 0.105595 / 0.014526 (0.091070) | 0.108695 / 0.176557 (-0.067861) | 0.163182 / 0.737135 (-0.573953) | 0.114999 / 0.296338 (-0.181339) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.483713 / 0.215209 (0.268504) | 4.836063 / 2.077655 (2.758409) | 2.488072 / 1.504120 (0.983952) | 2.289556 / 1.541195 (0.748361) | 2.342912 / 1.468490 (0.874422) | 0.565937 / 4.584777 (-4.018840) | 3.479085 / 3.745712 (-0.266627) | 1.770922 / 5.269862 (-3.498940) | 1.046084 / 4.565676 (-3.519592) | 0.067857 / 0.424275 (-0.356418) | 0.011283 / 0.007607 (0.003676) | 0.592966 / 0.226044 (0.366921) | 5.932842 / 2.268929 (3.663914) | 2.956252 / 55.444624 (-52.488372) | 2.602704 / 6.876477 (-4.273772) | 2.715625 / 2.142072 (0.573552) | 0.674299 / 4.805227 (-4.130929) | 0.136039 / 6.500664 (-6.364625) | 0.067629 / 0.075469 (-0.007840) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.333734 / 1.841788 (-0.508054) | 14.561943 / 8.074308 (6.487634) | 14.455385 / 10.191392 (4.263993) | 0.132020 / 0.680424 (-0.548404) | 0.016893 / 0.534201 (-0.517308) | 0.367146 / 0.579283 (-0.212137) | 0.399623 / 0.434364 (-0.034741) | 0.432658 / 0.540337 (-0.107680) | 0.530475 / 1.386936 (-0.856461) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#18da5adb22b2b403b8d8ae673192746d2ed7e9f9 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006045 / 0.011353 (-0.005308) | 0.003906 / 0.011008 (-0.007103) | 0.097558 / 0.038508 (0.059050) | 0.038827 / 0.023109 (0.015718) | 0.393564 / 0.275898 (0.117666) | 0.442459 / 0.323480 (0.118980) | 0.004792 / 0.007986 (-0.003194) | 0.002984 / 0.004328 (-0.001345) | 0.076419 / 0.004250 (0.072169) | 0.053606 / 0.037052 (0.016554) | 0.409743 / 0.258489 (0.151254) | 0.445753 / 0.293841 (0.151912) | 0.027753 / 0.128546 (-0.100793) | 0.008428 / 0.075646 (-0.067219) | 0.310267 / 0.419271 (-0.109004) | 0.057582 / 0.043533 (0.014049) | 0.396624 / 0.255139 (0.141485) | 0.416288 / 0.283200 (0.133089) | 0.029048 / 0.141683 (-0.112635) | 1.495362 / 1.452155 (0.043207) | 1.546331 / 1.492716 (0.053615) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.203832 / 0.018006 (0.185826) | 0.423649 / 0.000490 (0.423160) | 0.004533 / 0.000200 (0.004333) | 0.000076 / 0.000054 (0.000022) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023084 / 0.037411 (-0.014328) | 0.100503 / 0.014526 (0.085977) | 0.105058 / 0.176557 (-0.071499) | 0.168506 / 0.737135 (-0.568629) | 0.112019 / 0.296338 (-0.184320) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.425877 / 0.215209 (0.210668) | 4.251278 / 2.077655 (2.173624) | 1.931339 / 1.504120 (0.427219) | 1.730578 / 1.541195 (0.189383) | 1.750637 / 1.468490 
(0.282147) | 0.559307 / 4.584777 (-4.025470) | 3.461665 / 3.745712 (-0.284047) | 2.826959 / 5.269862 (-2.442903) | 1.418448 / 4.565676 (-3.147229) | 0.067881 / 0.424275 (-0.356394) | 0.011394 / 0.007607 (0.003787) | 0.533226 / 0.226044 (0.307181) | 5.341849 / 2.268929 (3.072921) | 2.367832 / 55.444624 (-53.076792) | 2.027240 / 6.876477 (-4.849236) | 2.095852 / 2.142072 (-0.046220) | 0.673790 / 4.805227 (-4.131437) | 0.136044 / 6.500664 (-6.364620) | 0.066350 / 0.075469 (-0.009119) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.203740 / 1.841788 (-0.638048) | 13.720879 / 8.074308 (5.646571) | 13.405939 / 10.191392 (3.214547) | 0.146792 / 0.680424 (-0.533632) | 0.016844 / 0.534201 (-0.517357) | 0.373455 / 0.579283 (-0.205828) | 0.394596 / 0.434364 (-0.039768) | 0.464715 / 0.540337 (-0.075623) | 0.558931 / 1.386936 (-0.828005) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006118 / 0.011353 (-0.005235) | 0.003817 / 0.011008 (-0.007191) | 0.077494 / 0.038508 (0.038985) | 0.037507 / 0.023109 (0.014398) | 0.387030 / 0.275898 (0.111132) | 0.437352 / 0.323480 (0.113872) | 0.004810 / 0.007986 (-0.003176) | 0.002935 / 0.004328 (-0.001394) | 0.077143 / 0.004250 (0.072892) | 0.053986 / 0.037052 (0.016933) | 0.393164 / 0.258489 (0.134675) | 0.449603 / 0.293841 (0.155762) | 0.029303 / 0.128546 (-0.099244) | 0.008481 / 0.075646 (-0.067165) | 0.083363 / 0.419271 (-0.335908) | 0.043877 / 0.043533 (0.000344) | 0.378175 / 0.255139 (0.123036) | 0.403996 / 0.283200 (0.120797) | 0.021688 / 0.141683 (-0.119995) | 1.541606 / 1.452155 (0.089452) | 1.552996 / 1.492716 (0.060280) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.236759 / 0.018006 (0.218752) | 0.416221 / 0.000490 (0.415732) | 0.000862 / 0.000200 (0.000662) | 0.000070 / 0.000054 (0.000016) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025543 / 0.037411 (-0.011868) | 0.101731 / 0.014526 (0.087206) | 0.108482 / 0.176557 (-0.068075) | 0.160290 / 0.737135 (-0.576845) | 0.111392 / 0.296338 (-0.184946) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.457767 / 0.215209 (0.242558) | 4.565976 / 2.077655 (2.488321) | 2.245413 / 1.504120 (0.741294) | 2.031458 / 1.541195 (0.490264) | 2.073193 / 1.468490 (0.604702) | 0.560461 / 4.584777 (-4.024316) | 3.422536 / 3.745712 (-0.323176) | 2.977017 / 5.269862 (-2.292845) | 1.377021 / 4.565676 (-3.188655) | 0.068444 / 0.424275 (-0.355831) | 0.011036 / 0.007607 (0.003429) | 0.571501 / 0.226044 (0.345456) | 5.702652 / 2.268929 (3.433723) | 2.727132 / 55.444624 (-52.717492) | 2.399269 / 6.876477 (-4.477208) | 2.574281 / 2.142072 (0.432208) | 0.682600 / 4.805227 (-4.122627) | 0.136943 / 6.500664 (-6.363722) | 0.067126 / 0.075469 (-0.008343) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.322196 / 1.841788 (-0.519592) | 14.239509 / 8.074308 (6.165201) | 14.235779 / 10.191392 (4.044387) | 0.148262 / 0.680424 (-0.532162) | 0.016566 / 0.534201 (-0.517635) | 0.364034 / 0.579283 (-0.215249) | 0.399157 / 0.434364 (-0.035207) | 0.426348 / 0.540337 (-0.113990) | 0.520804 / 1.386936 (-0.866132) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#8f57aae06bd325d76cb70cb774450f3a66f169cf \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007808 / 0.011353 (-0.003545) | 0.004706 / 0.011008 (-0.006303) | 0.100530 / 0.038508 (0.062022) | 0.052052 / 0.023109 (0.028943) | 0.419300 / 0.275898 (0.143402) | 0.488451 / 0.323480 (0.164971) | 0.006350 / 0.007986 (-0.001636) | 0.003875 / 0.004328 (-0.000453) | 0.076489 / 0.004250 (0.072238) | 0.077554 / 0.037052 (0.040502) | 0.435863 / 0.258489 (0.177373) | 0.483241 / 0.293841 (0.189400) | 0.037518 / 0.128546 (-0.091028) | 0.009857 / 0.075646 (-0.065789) | 0.340933 / 0.419271 (-0.078339) | 0.087046 / 0.043533 (0.043514) | 0.410721 / 0.255139 (0.155582) | 0.428995 / 0.283200 (0.145795) | 0.041701 / 0.141683 (-0.099982) | 1.821017 / 1.452155 (0.368862) | 1.837021 / 1.492716 (0.344305) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.228444 / 0.018006 (0.210438) | 0.480446 / 0.000490 (0.479956) | 0.004963 / 0.000200 (0.004763) | 0.000101 / 0.000054 (0.000046) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032485 / 0.037411 (-0.004926) | 0.096500 / 0.014526 (0.081974) | 0.111547 / 0.176557 (-0.065010) | 0.178842 / 0.737135 (-0.558294) | 0.111099 / 0.296338 (-0.185240) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.467159 / 0.215209 (0.251950) | 4.701676 / 2.077655 (2.624021) | 2.390560 / 1.504120 (0.886440) | 2.197722 / 1.541195 (0.656528) | 2.264705 / 1.468490 
(0.796215) | 0.568667 / 4.584777 (-4.016110) | 4.200724 / 3.745712 (0.455012) | 3.777625 / 5.269862 (-1.492236) | 2.372451 / 4.565676 (-2.193225) | 0.067562 / 0.424275 (-0.356714) | 0.008947 / 0.007607 (0.001340) | 0.556910 / 0.226044 (0.330865) | 5.528927 / 2.268929 (3.259998) | 2.902780 / 55.444624 (-52.541844) | 2.507933 / 6.876477 (-4.368544) | 2.734627 / 2.142072 (0.592554) | 0.683305 / 4.805227 (-4.121922) | 0.158288 / 6.500664 (-6.342376) | 0.071252 / 0.075469 (-0.004217) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.487502 / 1.841788 (-0.354286) | 22.193341 / 8.074308 (14.119033) | 15.922607 / 10.191392 (5.731215) | 0.172189 / 0.680424 (-0.508235) | 0.021502 / 0.534201 (-0.512699) | 0.471198 / 0.579283 (-0.108085) | 0.475979 / 0.434364 (0.041615) | 0.544675 / 0.540337 (0.004338) | 0.756102 / 1.386936 (-0.630834) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007635 / 0.011353 (-0.003717) | 0.004614 / 0.011008 (-0.006394) | 0.075852 / 0.038508 (0.037344) | 0.049700 / 0.023109 (0.026591) | 0.425957 / 0.275898 (0.150059) | 0.512590 / 0.323480 (0.189110) | 0.006921 / 0.007986 (-0.001065) | 0.003714 / 0.004328 (-0.000615) | 0.075536 / 0.004250 (0.071286) | 0.070206 / 0.037052 (0.033153) | 0.455706 / 0.258489 (0.197217) | 0.512231 / 0.293841 (0.218390) | 0.036685 / 0.128546 (-0.091861) | 0.009793 / 0.075646 (-0.065853) | 0.084208 / 0.419271 (-0.335064) | 0.065262 / 0.043533 (0.021729) | 0.423761 / 0.255139 (0.168622) | 0.456791 / 0.283200 (0.173591) | 0.044539 / 0.141683 (-0.097144) | 1.797029 / 1.452155 (0.344874) | 1.864124 / 1.492716 (0.371408) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.366840 / 0.018006 (0.348834) | 0.479254 / 0.000490 (0.478765) | 0.070383 / 0.000200 (0.070183) | 0.000762 / 0.000054 (0.000707) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034233 / 0.037411 (-0.003178) | 0.103140 / 0.014526 (0.088614) | 0.117099 / 0.176557 (-0.059457) | 0.178532 / 0.737135 (-0.558603) | 0.120092 / 0.296338 (-0.176247) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.492993 / 0.215209 (0.277784) | 4.878776 / 2.077655 (2.801121) | 2.566666 / 1.504120 (1.062547) | 2.356383 / 1.541195 (0.815188) | 2.454723 / 1.468490 (0.986233) | 0.571432 / 4.584777 (-4.013345) | 4.240554 / 3.745712 (0.494842) | 7.509259 / 5.269862 (2.239398) | 4.040294 / 4.565676 (-0.525382) | 0.067409 / 0.424275 (-0.356866) | 0.008657 / 0.007607 (0.001050) | 0.585751 / 0.226044 (0.359707) | 5.967668 / 2.268929 (3.698739) | 3.195573 / 55.444624 (-52.249052) | 2.839772 / 6.876477 (-4.036704) | 2.806319 / 2.142072 (0.664246) | 0.681502 / 4.805227 (-4.123725) | 0.158673 / 6.500664 (-6.341991) | 0.073224 / 0.075469 (-0.002245) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.623335 / 1.841788 (-0.218453) | 22.490806 / 8.074308 (14.416498) | 16.762435 / 10.191392 (6.571043) | 0.180961 / 0.680424 (-0.499463) | 0.022716 / 0.534201 (-0.511485) | 0.472910 / 0.579283 (-0.106373) | 0.471616 / 0.434364 (0.037252) | 0.548192 / 0.540337 (0.007854) | 0.734357 / 1.386936 (-0.652579) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#c0498b47a00153d4730352b6595fc51ab054fb95 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005858 / 0.011353 (-0.005495) | 0.003512 / 0.011008 (-0.007497) | 0.079739 / 0.038508 (0.041231) | 0.057736 / 0.023109 (0.034627) | 0.317640 / 0.275898 (0.041742) | 0.354157 / 0.323480 (0.030677) | 0.004772 / 0.007986 (-0.003214) | 0.002824 / 0.004328 (-0.001504) | 0.063288 / 0.004250 (0.059037) | 0.049542 / 0.037052 (0.012489) | 0.323974 / 0.258489 (0.065485) | 0.372149 / 0.293841 (0.078308) | 0.026841 / 0.128546 (-0.101705) | 0.007846 / 0.075646 (-0.067800) | 0.262546 / 0.419271 (-0.156725) | 0.051952 / 0.043533 (0.008420) | 0.319439 / 0.255139 (0.064300) | 0.343862 / 0.283200 (0.060663) | 0.027021 / 0.141683 (-0.114662) | 1.445211 / 1.452155 (-0.006944) | 1.485006 / 1.492716 (-0.007711) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.183174 / 0.018006 (0.165167) | 0.422794 / 0.000490 (0.422304) | 0.004148 / 0.000200 (0.003948) | 0.000067 / 0.000054 (0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023037 / 0.037411 (-0.014374) | 0.071300 / 0.014526 (0.056775) | 0.083022 / 0.176557 (-0.093535) | 0.146215 / 0.737135 (-0.590920) | 0.082549 / 0.296338 (-0.213789) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.422846 / 0.215209 (0.207637) | 4.215280 / 2.077655 (2.137626) | 2.256802 / 1.504120 (0.752682) | 2.056867 / 1.541195 (0.515673) | 2.102478 / 1.468490 
(0.633988) | 0.497552 / 4.584777 (-4.087225) | 3.049716 / 3.745712 (-0.695996) | 4.209227 / 5.269862 (-1.060635) | 2.599947 / 4.565676 (-1.965730) | 0.059131 / 0.424275 (-0.365144) | 0.006459 / 0.007607 (-0.001148) | 0.495047 / 0.226044 (0.269003) | 4.952332 / 2.268929 (2.683404) | 2.675260 / 55.444624 (-52.769365) | 2.333223 / 6.876477 (-4.543254) | 2.449573 / 2.142072 (0.307500) | 0.583420 / 4.805227 (-4.221807) | 0.125140 / 6.500664 (-6.375524) | 0.060209 / 0.075469 (-0.015260) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.215033 / 1.841788 (-0.626755) | 18.101107 / 8.074308 (10.026799) | 13.489222 / 10.191392 (3.297830) | 0.147122 / 0.680424 (-0.533302) | 0.016567 / 0.534201 (-0.517634) | 0.329909 / 0.579283 (-0.249374) | 0.340952 / 0.434364 (-0.093412) | 0.379166 / 0.540337 (-0.161172) | 0.510767 / 1.386936 (-0.876169) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005942 / 0.011353 (-0.005411) | 0.003628 / 0.011008 (-0.007380) | 0.061975 / 0.038508 (0.023467) | 0.058331 / 0.023109 (0.035221) | 0.393277 / 0.275898 (0.117379) | 0.410740 / 0.323480 (0.087261) | 0.004546 / 0.007986 (-0.003440) | 0.002826 / 0.004328 (-0.001503) | 0.062216 / 0.004250 (0.057966) | 0.049801 / 0.037052 (0.012748) | 0.394070 / 0.258489 (0.135581) | 0.414407 / 0.293841 (0.120566) | 0.027161 / 0.128546 (-0.101385) | 0.007901 / 0.075646 (-0.067746) | 0.066778 / 0.419271 (-0.352493) | 0.041354 / 0.043533 (-0.002179) | 0.379432 / 0.255139 (0.124293) | 0.402966 / 0.283200 (0.119766) | 0.020279 / 0.141683 (-0.121404) | 1.416986 / 1.452155 (-0.035169) | 1.474335 / 1.492716 (-0.018382) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.226147 / 0.018006 (0.208140) | 0.404361 / 0.000490 (0.403871) | 0.000358 / 0.000200 (0.000158) | 0.000054 / 0.000054 (-0.000000) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025105 / 0.037411 (-0.012306) | 0.075849 / 0.014526 (0.061323) | 0.084781 / 0.176557 (-0.091775) | 0.137415 / 0.737135 (-0.599720) | 0.086288 / 0.296338 (-0.210051) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.445925 / 0.215209 (0.230716) | 4.453478 / 2.077655 (2.375823) | 2.419048 / 1.504120 (0.914928) | 2.246363 / 1.541195 (0.705168) | 2.304022 / 1.468490 (0.835532) | 0.499132 / 4.584777 (-4.085645) | 3.001336 / 3.745712 (-0.744376) | 2.902593 / 5.269862 (-2.367269) | 1.819843 / 4.565676 (-2.745834) | 0.057210 / 0.424275 (-0.367065) | 0.006338 / 0.007607 (-0.001269) | 0.523280 / 0.226044 (0.297236) | 5.235969 / 2.268929 (2.967040) | 2.897585 / 55.444624 (-52.547039) | 2.541586 / 6.876477 (-4.334891) | 2.564233 / 2.142072 (0.422160) | 0.584714 / 4.805227 (-4.220513) | 0.124611 / 6.500664 (-6.376053) | 0.061774 / 0.075469 (-0.013695) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.349799 / 1.841788 (-0.491988) | 18.225076 / 8.074308 (10.150768) | 13.781518 / 10.191392 (3.590126) | 0.130562 / 0.680424 (-0.549862) | 0.016434 / 0.534201 (-0.517767) | 0.331607 / 0.579283 (-0.247676) | 0.343456 / 0.434364 (-0.090908) | 0.380437 / 0.540337 (-0.159900) | 0.522793 / 1.386936 (-0.864143) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f0a3dbbd2e7ace162346d95ec27db674e80c1e23 \"CML watermark\")\n"
] | 1,688,482,957,000 | 1,688,573,751,000 | null | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6005/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6005/timeline | null | null | 1 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6005",
"html_url": "https://github.com/huggingface/datasets/pull/6005",
"diff_url": "https://github.com/huggingface/datasets/pull/6005.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6005.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6004 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6004/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6004/comments | https://api.github.com/repos/huggingface/datasets/issues/6004/events | https://github.com/huggingface/datasets/pull/6004 | 1,786,636,368 | PR_kwDODunzps5UjN2h | 6,004 | Misc improvements | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006897 / 0.011353 (-0.004456) | 0.004207 / 0.011008 (-0.006802) | 0.104828 / 0.038508 (0.066320) | 0.048054 / 0.023109 (0.024945) | 0.373991 / 0.275898 (0.098093) | 0.426740 / 0.323480 (0.103260) | 0.005540 / 0.007986 (-0.002446) | 0.003531 / 0.004328 (-0.000797) | 0.079304 / 0.004250 (0.075053) | 0.066996 / 0.037052 (0.029944) | 0.370675 / 0.258489 (0.112186) | 0.414154 / 0.293841 (0.120313) | 0.031567 / 0.128546 (-0.096979) | 0.008843 / 0.075646 (-0.066803) | 0.357426 / 0.419271 (-0.061845) | 0.067040 / 0.043533 (0.023508) | 0.362384 / 0.255139 (0.107245) | 0.376056 / 0.283200 (0.092856) | 0.032985 / 0.141683 (-0.108697) | 1.560603 / 1.452155 (0.108448) | 1.619024 / 1.492716 (0.126308) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.229059 / 0.018006 (0.211053) | 0.440513 / 0.000490 (0.440023) | 0.004647 / 0.000200 (0.004447) | 0.000085 / 0.000054 (0.000030) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029517 / 0.037411 (-0.007894) | 0.120974 / 0.014526 (0.106448) | 0.125070 / 0.176557 (-0.051486) | 0.184695 / 0.737135 (-0.552441) | 0.130244 / 0.296338 (-0.166095) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.436930 / 0.215209 (0.221721) | 4.356118 / 2.077655 (2.278463) | 2.049169 / 1.504120 (0.545049) | 1.842898 / 1.541195 (0.301703) | 1.918948 / 1.468490 
(0.450458) | 0.553573 / 4.584777 (-4.031204) | 3.883195 / 3.745712 (0.137483) | 3.209780 / 5.269862 (-2.060081) | 1.551707 / 4.565676 (-3.013970) | 0.068181 / 0.424275 (-0.356094) | 0.012370 / 0.007607 (0.004762) | 0.539899 / 0.226044 (0.313854) | 5.380008 / 2.268929 (3.111079) | 2.518178 / 55.444624 (-52.926446) | 2.174190 / 6.876477 (-4.702286) | 2.317812 / 2.142072 (0.175740) | 0.674154 / 4.805227 (-4.131073) | 0.149313 / 6.500664 (-6.351351) | 0.068297 / 0.075469 (-0.007172) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.261426 / 1.841788 (-0.580362) | 15.316378 / 8.074308 (7.242070) | 13.573512 / 10.191392 (3.382120) | 0.190022 / 0.680424 (-0.490401) | 0.018697 / 0.534201 (-0.515504) | 0.448122 / 0.579283 (-0.131161) | 0.435044 / 0.434364 (0.000681) | 0.550065 / 0.540337 (0.009728) | 0.653547 / 1.386936 (-0.733389) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007116 / 0.011353 (-0.004237) | 0.004375 / 0.011008 (-0.006633) | 0.081793 / 0.038508 (0.043285) | 0.047980 / 0.023109 (0.024871) | 0.392185 / 0.275898 (0.116287) | 0.462263 / 0.323480 (0.138783) | 0.005574 / 0.007986 (-0.002412) | 0.003552 / 0.004328 (-0.000776) | 0.080413 / 0.004250 (0.076162) | 0.065539 / 0.037052 (0.028487) | 0.413137 / 0.258489 (0.154648) | 0.467377 / 0.293841 (0.173536) | 0.034386 / 0.128546 (-0.094160) | 0.009183 / 0.075646 (-0.066464) | 0.087542 / 0.419271 (-0.331730) | 0.053954 / 0.043533 (0.010421) | 0.385096 / 0.255139 (0.129957) | 0.404900 / 0.283200 (0.121701) | 0.025908 / 0.141683 (-0.115775) | 1.550159 / 1.452155 (0.098005) | 1.598794 / 1.492716 (0.106078) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.246222 / 0.018006 (0.228216) | 0.441095 / 0.000490 (0.440605) | 0.006863 / 0.000200 (0.006663) | 0.000109 / 0.000054 (0.000055) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032179 / 0.037411 (-0.005233) | 0.120112 / 0.014526 (0.105586) | 0.129326 / 0.176557 (-0.047230) | 0.184542 / 0.737135 (-0.552593) | 0.135038 / 0.296338 (-0.161300) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.459002 / 0.215209 (0.243793) | 4.580258 / 2.077655 (2.502604) | 2.296689 / 1.504120 (0.792569) | 2.104338 / 1.541195 (0.563143) | 2.182896 / 1.468490 (0.714406) | 0.546447 / 4.584777 (-4.038330) | 3.854047 / 3.745712 (0.108335) | 1.873829 / 5.269862 (-3.396032) | 1.116484 / 4.565676 (-3.449193) | 0.067158 / 0.424275 (-0.357117) | 0.012035 / 0.007607 (0.004428) | 0.556642 / 0.226044 (0.330597) | 5.574436 / 2.268929 (3.305508) | 2.828223 / 55.444624 (-52.616402) | 2.519851 / 6.876477 (-4.356626) | 2.668594 / 2.142072 (0.526521) | 0.675989 / 4.805227 (-4.129238) | 0.146075 / 6.500664 (-6.354589) | 0.067788 / 0.075469 (-0.007681) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.345958 / 1.841788 (-0.495830) | 15.672748 / 8.074308 (7.598440) | 14.937583 / 10.191392 (4.746191) | 0.163479 / 0.680424 (-0.516945) | 0.018364 / 0.534201 (-0.515837) | 0.433296 / 0.579283 (-0.145987) | 0.432463 / 0.434364 (-0.001901) | 0.512000 / 0.540337 (-0.028338) | 0.619397 / 1.386936 (-0.767539) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#0832d48a07ed00b406271f4b4439e6d54ae38ebf \"CML watermark\")\n",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6004). All of your documentation changes will be reflected on that endpoint."
] | 1,688,408,954,000 | 1,688,409,261,000 | null | CONTRIBUTOR | null | Contains the following improvements:
* fixes a "share dataset" link in README and modifies the "hosting" part in the disclaimer section
* updates `Makefile` to also run the style checks on `utils` and `setup.py`
* deletes a test for GH-hosted datasets (no longer supported)
* deletes `convert_dataset.sh` (outdated)
* aligns `utils/release.py` with `transformers` (the current version is outdated) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6004/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6004/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6004",
"html_url": "https://github.com/huggingface/datasets/pull/6004",
"diff_url": "https://github.com/huggingface/datasets/pull/6004.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6004.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6003 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6003/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6003/comments | https://api.github.com/repos/huggingface/datasets/issues/6003/events | https://github.com/huggingface/datasets/issues/6003 | 1,786,554,110 | I_kwDODunzps5qfKb- | 6,003 | interleave_datasets & DataCollatorForLanguageModeling having a conflict ? | {
"login": "PonteIneptique",
"id": 1929830,
"node_id": "MDQ6VXNlcjE5Mjk4MzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1929830?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PonteIneptique",
"html_url": "https://github.com/PonteIneptique",
"followers_url": "https://api.github.com/users/PonteIneptique/followers",
"following_url": "https://api.github.com/users/PonteIneptique/following{/other_user}",
"gists_url": "https://api.github.com/users/PonteIneptique/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PonteIneptique/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PonteIneptique/subscriptions",
"organizations_url": "https://api.github.com/users/PonteIneptique/orgs",
"repos_url": "https://api.github.com/users/PonteIneptique/repos",
"events_url": "https://api.github.com/users/PonteIneptique/events{/privacy}",
"received_events_url": "https://api.github.com/users/PonteIneptique/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [] | 1,688,404,531,000 | 1,688,404,531,000 | null | NONE | null | ### Describe the bug
Hi everyone :)
I have two local, custom datasets (one "sentence" per line) which I split 95/5 for pre-training a BERT model. I use a modified version of `run_mlm.py` so that I can make use of `interleave_datasets`:
- `tokenize()` runs fine
- `group_texts()` runs fine
Every time, at step 19, I get
```pytb
File "env/lib/python3.9/site-packages/transformers/data/data_collator.py", line 779, in torch_mask_tokens
inputs[indices_random] = random_words[indices_random]
RuntimeError: Index put requires the source and destination dtypes match, got Float for the destination and Long for the source.
```
I tried:
- training without interleave on dataset 1: it runs
- training without interleave on dataset 2: it runs
- training without `.to_iterable_dataset()`: it hangs, then crashes
- training without `group_texts()` and padding to max_length: this seemed to fix the issue, but it may just have pushed the crash to a much later step.
I might have coded something wrong, but I don't see what.
### Steps to reproduce the bug
I have this function:
```py
def build_dataset(path: str, percent: str):
    # Load a local text file (one sentence per line) and take a slice, e.g. ":95%" or "95%:".
    dataset = load_dataset(
        "text",
        data_files={"train": [path]},
        split=f"train[{percent}]"
    )
    # Tokenize in batches, in parallel over `num_proc` workers.
    dataset = dataset.map(
        lambda examples: tokenize(examples["text"]),
        batched=True,
        num_proc=num_proc,
    )
    # Concatenate and re-chunk the tokenized texts into 512-token blocks.
    dataset = dataset.map(
        group_texts,
        batched=True,
        num_proc=num_proc,
        desc=f"Grouping texts in chunks of {tokenizer.max_seq_length}",
        remove_columns=["text"]
    )
    print(len(dataset))
    return dataset.to_iterable_dataset()
```
I hardcoded `group_texts`:
```py
def group_texts(examples):
    # Concatenate all texts.
    concatenated_examples = {k: list(chain(*examples[k])) for k in examples.keys()}
    total_length = len(concatenated_examples[list(examples.keys())[0]])
    # We drop the small remainder, and if the total_length < max_seq_length we exclude this batch and return an empty dict.
    # We could add padding if the model supported it instead of this drop, you can customize this part to your needs.
    total_length = (total_length // 512) * 512
    # Split by chunks of max_len.
    result = {
        k: [t[i: i + 512] for i in range(0, total_length, 512)]
        for k, t in concatenated_examples.items()
    }
    # result = {k: [el for el in elements if el] for k, elements in result.items()}
    return result
```
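One thing I considered but haven't verified: filtering right after the `group_texts` map so that only exact 512-token chunks survive (a minimal sketch; `input_ids` is assumed to be the tokenizer's output column):
```py
# Hypothetical safeguard, not part of my actual script:
dataset = dataset.filter(lambda example: len(example["input_ids"]) == 512)
```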
And then I build datasets using the following code:
```py
train1 = build_dataset("d1.txt", ":95%")
train2 = build_dataset("d2.txt", ":95%")
dev1 = build_dataset("d1.txt", "95%:")
dev2 = build_dataset("d2.txt", "95%:")
```
and finally I run
```py
train_dataset = interleave_datasets(
    [train1, train2],
    probabilities=[0.8, 0.2],
    seed=42
)
eval_dataset = interleave_datasets(
    [dev1, dev2],
    probabilities=[0.8, 0.2],
    seed=42
)
```
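Before launching the training, a quick scan of the interleaved stream can tell whether malformed examples show up at all (an untested sketch; it consumes a bit more than the ~608 examples that batch size 32 needs to reach step 19):
```py
for i, example in enumerate(train_dataset):
    if len(example["input_ids"]) != 512:
        print(f"suspicious example at position {i}:", {k: len(v) for k, v in example.items()})
    if i >= 32 * 20:
        break
```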
Then I run the training part which remains mostly untouched:
> CUDA_VISIBLE_DEVICES=1 python custom_dataset.py --model_type bert --per_device_train_batch_size 32 --do_train --output_dir /var/mlm/training-bert/model --max_seq_length 512 --save_steps 10000 --save_total_limit 3 --auto_find_batch_size --logging_dir ./logs-bert --learning_rate 0.0001 --num_train_epochs 25 --warmup_steps 10000 --max_steps 45000 --fp16
### Expected behavior
The model should then train normally, but fails every time at the same step (19).
Printing the variables at `inputs[indices_random] = random_words[indices_random]` shows a magnificent empty tensor (, 32) [if I remember correctly]
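For reference, the RuntimeError can be reproduced in isolation once the batch tensor is empty and therefore defaults to float (a minimal sketch of what I believe happens inside `torch_mask_tokens`; the shapes here are assumptions):
```py
import torch

inputs = torch.empty(0, 32)                            # an empty batch defaults to float32
indices_random = torch.zeros((0, 32), dtype=torch.bool)
random_words = torch.randint(0, 30000, (0, 32))        # token ids are int64 (Long)
inputs[indices_random] = random_words[indices_random]  # RuntimeError: dtypes don't match
```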
### Environment info
transformers[torch] 4.30.2
Ubuntu
A100 0 CUDA 12
Driver Version: 525.116.04 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6003/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6003/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6002 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6002/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6002/comments | https://api.github.com/repos/huggingface/datasets/issues/6002/events | https://github.com/huggingface/datasets/pull/6002 | 1,786,053,060 | PR_kwDODunzps5UhP-Z | 6,002 | Add KLUE-MRC metrics | {
"login": "ingyuseong",
"id": 37537248,
"node_id": "MDQ6VXNlcjM3NTM3MjQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/37537248?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ingyuseong",
"html_url": "https://github.com/ingyuseong",
"followers_url": "https://api.github.com/users/ingyuseong/followers",
"following_url": "https://api.github.com/users/ingyuseong/following{/other_user}",
"gists_url": "https://api.github.com/users/ingyuseong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ingyuseong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ingyuseong/subscriptions",
"organizations_url": "https://api.github.com/users/ingyuseong/orgs",
"repos_url": "https://api.github.com/users/ingyuseong/repos",
"events_url": "https://api.github.com/users/ingyuseong/events{/privacy}",
"received_events_url": "https://api.github.com/users/ingyuseong/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"The metrics API in `datasets` is deprecated as of version 2.0, and `evaulate` is our new library for metrics. You can add a new metric to it by following [these steps](https://huggingface.co/docs/evaluate/creating_and_sharing)."
] | 1,688,386,270,000 | 1,688,398,457,000 | null | NONE | null | ## Metrics for KLUE-MRC (Korean Language Understanding Evaluation — Machine Reading Comprehension)
Adding metrics for [KLUE-MRC](https://huggingface.co/datasets/klue).
KLUE-MRC is very similar to SQuAD 2.0 but has a slightly different format, which is why I added dedicated metrics for it.
Specifically, [LM Eval Harness](https://github.com/EleutherAI/lm-evaluation-harness) leverages the SQuAD scoring script to evaluate SQuAD 2.0 and KorQuAD, but that script isn't suitable for KLUE-MRC because its format differs from SQuAD 2.0's. This is why I added a dedicated scoring script for KLUE-MRC.
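A minimal usage sketch (both the metric name `klue_mrc` and the SQuAD-2.0-style input format are assumptions for illustration, not guaranteed by this PR):
```python
from datasets import load_metric

klue_mrc = load_metric("klue_mrc")  # hypothetical registration name
predictions = [{"id": "klue-mrc-1", "prediction_text": "서울", "no_answer_probability": 0.0}]
references = [{"id": "klue-mrc-1", "answers": {"text": ["서울"], "answer_start": [0]}}]
print(klue_mrc.compute(predictions=predictions, references=references))
```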
- [x] All tests passed
- [x] Added a metric card (based on the metric card of SQuAD 2.0)
- [x] Compatibility test with [LM Eval Harness](https://github.com/EleutherAI/lm-evaluation-harness) passed
### References
- [KLUE: Korean Language Understanding Evaluation](https://datasets-benchmarks-proceedings.neurips.cc/paper_files/paper/2021/file/98dce83da57b0395e163467c9dae521b-Paper-round2.pdf)
- [KLUE on Hugging Face Datasets](https://huggingface.co/datasets/klue)
- #2416 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6002/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6002/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6002",
"html_url": "https://github.com/huggingface/datasets/pull/6002",
"diff_url": "https://github.com/huggingface/datasets/pull/6002.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6002.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6001 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6001/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6001/comments | https://api.github.com/repos/huggingface/datasets/issues/6001/events | https://github.com/huggingface/datasets/pull/6001 | 1,782,516,627 | PR_kwDODunzps5UVMMh | 6,001 | Align `column_names` type check with type hint in `sort` | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006038 / 0.011353 (-0.005315) | 0.003797 / 0.011008 (-0.007211) | 0.097686 / 0.038508 (0.059178) | 0.035235 / 0.023109 (0.012126) | 0.317294 / 0.275898 (0.041396) | 0.377682 / 0.323480 (0.054202) | 0.003485 / 0.007986 (-0.004501) | 0.003603 / 0.004328 (-0.000725) | 0.077268 / 0.004250 (0.073017) | 0.054649 / 0.037052 (0.017597) | 0.322293 / 0.258489 (0.063804) | 0.372277 / 0.293841 (0.078436) | 0.027927 / 0.128546 (-0.100619) | 0.008495 / 0.075646 (-0.067151) | 0.313078 / 0.419271 (-0.106193) | 0.046974 / 0.043533 (0.003441) | 0.313848 / 0.255139 (0.058709) | 0.338454 / 0.283200 (0.055255) | 0.020462 / 0.141683 (-0.121221) | 1.473027 / 1.452155 (0.020873) | 1.539468 / 1.492716 (0.046752) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.221429 / 0.018006 (0.203423) | 0.412044 / 0.000490 (0.411555) | 0.005866 / 0.000200 (0.005666) | 0.000075 / 0.000054 (0.000021) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022870 / 0.037411 (-0.014541) | 0.099129 / 0.014526 (0.084603) | 0.103463 / 0.176557 (-0.073094) | 0.164969 / 0.737135 (-0.572166) | 0.110000 / 0.296338 (-0.186339) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.431311 / 0.215209 (0.216102) | 4.293562 / 2.077655 (2.215907) | 1.961209 / 1.504120 (0.457089) | 1.733680 / 1.541195 (0.192485) | 1.793171 / 1.468490 
(0.324681) | 0.568566 / 4.584777 (-4.016211) | 3.401794 / 3.745712 (-0.343918) | 1.827949 / 5.269862 (-3.441913) | 1.055963 / 4.565676 (-3.509714) | 0.068459 / 0.424275 (-0.355816) | 0.011586 / 0.007607 (0.003979) | 0.533936 / 0.226044 (0.307891) | 5.347637 / 2.268929 (3.078708) | 2.378056 / 55.444624 (-53.066569) | 2.032159 / 6.876477 (-4.844318) | 2.159064 / 2.142072 (0.016991) | 0.674528 / 4.805227 (-4.130699) | 0.136859 / 6.500664 (-6.363805) | 0.066629 / 0.075469 (-0.008840) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.218084 / 1.841788 (-0.623704) | 14.141710 / 8.074308 (6.067402) | 13.588415 / 10.191392 (3.397023) | 0.155104 / 0.680424 (-0.525320) | 0.017160 / 0.534201 (-0.517041) | 0.375558 / 0.579283 (-0.203725) | 0.386293 / 0.434364 (-0.048071) | 0.459476 / 0.540337 (-0.080862) | 0.548561 / 1.386936 (-0.838375) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005878 / 0.011353 (-0.005475) | 0.003750 / 0.011008 (-0.007259) | 0.077720 / 0.038508 (0.039212) | 0.034955 / 0.023109 (0.011846) | 0.357480 / 0.275898 (0.081582) | 0.418210 / 0.323480 (0.094730) | 0.004566 / 0.007986 (-0.003419) | 0.002918 / 0.004328 (-0.001410) | 0.076517 / 0.004250 (0.072266) | 0.050202 / 0.037052 (0.013150) | 0.368166 / 0.258489 (0.109677) | 0.415681 / 0.293841 (0.121840) | 0.029496 / 0.128546 (-0.099050) | 0.008547 / 0.075646 (-0.067099) | 0.083037 / 0.419271 (-0.336234) | 0.045001 / 0.043533 (0.001468) | 0.356503 / 0.255139 (0.101364) | 0.383747 / 0.283200 (0.100547) | 0.025071 / 0.141683 (-0.116612) | 1.541985 / 1.452155 (0.089830) | 1.594710 / 1.492716 (0.101994) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.204491 / 0.018006 (0.186484) | 0.408686 / 0.000490 (0.408196) | 0.002505 / 0.000200 (0.002305) | 0.000082 / 0.000054 (0.000027) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024446 / 0.037411 (-0.012965) | 0.101432 / 0.014526 (0.086906) | 0.108105 / 0.176557 (-0.068452) | 0.161195 / 0.737135 (-0.575940) | 0.112671 / 0.296338 (-0.183667) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.459697 / 0.215209 (0.244488) | 4.570071 / 2.077655 (2.492416) | 2.211547 / 1.504120 (0.707427) | 1.996651 / 1.541195 (0.455457) | 2.015621 / 1.468490 (0.547131) | 0.567423 / 4.584777 (-4.017354) | 3.408027 / 3.745712 (-0.337685) | 2.913824 / 5.269862 (-2.356038) | 1.423223 / 4.565676 (-3.142453) | 0.068740 / 0.424275 (-0.355535) | 0.010997 / 0.007607 (0.003390) | 0.567340 / 0.226044 (0.341296) | 5.666280 / 2.268929 (3.397351) | 2.804934 / 55.444624 (-52.639690) | 2.430761 / 6.876477 (-4.445716) | 2.451820 / 2.142072 (0.309748) | 0.681926 / 4.805227 (-4.123301) | 0.137761 / 6.500664 (-6.362903) | 0.067173 / 0.075469 (-0.008296) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.329853 / 1.841788 (-0.511934) | 14.436232 / 8.074308 (6.361924) | 14.398645 / 10.191392 (4.207253) | 0.147421 / 0.680424 (-0.533002) | 0.016743 / 0.534201 (-0.517458) | 0.364964 / 0.579283 (-0.214319) | 0.387072 / 0.434364 (-0.047292) | 0.423892 / 0.540337 (-0.116445) | 0.521304 / 1.386936 (-0.865632) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#a62b6ce65f718e9ff4189da86d160ae4bb197fc2 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006463 / 0.011353 (-0.004889) | 0.003923 / 0.011008 (-0.007086) | 0.102096 / 0.038508 (0.063588) | 0.040230 / 0.023109 (0.017121) | 0.384688 / 0.275898 (0.108789) | 0.445574 / 0.323480 (0.122094) | 0.003590 / 0.007986 (-0.004395) | 0.004023 / 0.004328 (-0.000306) | 0.080125 / 0.004250 (0.075875) | 0.057406 / 0.037052 (0.020354) | 0.395049 / 0.258489 (0.136560) | 0.438065 / 0.293841 (0.144224) | 0.028963 / 0.128546 (-0.099583) | 0.008693 / 0.075646 (-0.066954) | 0.317158 / 0.419271 (-0.102114) | 0.047930 / 0.043533 (0.004397) | 0.382442 / 0.255139 (0.127303) | 0.410665 / 0.283200 (0.127466) | 0.020127 / 0.141683 (-0.121555) | 1.558554 / 1.452155 (0.106400) | 1.590959 / 1.492716 (0.098242) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.208826 / 0.018006 (0.190820) | 0.432037 / 0.000490 (0.431547) | 0.006509 / 0.000200 (0.006309) | 0.000285 / 0.000054 (0.000230) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023460 / 0.037411 (-0.013951) | 0.099070 / 0.014526 (0.084545) | 0.105771 / 0.176557 (-0.070785) | 0.166683 / 0.737135 (-0.570452) | 0.108755 / 0.296338 (-0.187583) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.424324 / 0.215209 (0.209115) | 4.225696 / 2.077655 (2.148042) | 1.910955 / 1.504120 (0.406835) | 1.704493 / 1.541195 (0.163298) | 1.782784 / 1.468490 
(0.314293) | 0.562927 / 4.584777 (-4.021850) | 3.380163 / 3.745712 (-0.365550) | 1.779641 / 5.269862 (-3.490221) | 1.029134 / 4.565676 (-3.536543) | 0.068325 / 0.424275 (-0.355950) | 0.011528 / 0.007607 (0.003921) | 0.530141 / 0.226044 (0.304097) | 5.323443 / 2.268929 (3.054514) | 2.346956 / 55.444624 (-53.097668) | 2.013335 / 6.876477 (-4.863142) | 2.118531 / 2.142072 (-0.023541) | 0.675206 / 4.805227 (-4.130021) | 0.135473 / 6.500664 (-6.365191) | 0.064804 / 0.075469 (-0.010665) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.240179 / 1.841788 (-0.601608) | 14.692449 / 8.074308 (6.618141) | 13.672223 / 10.191392 (3.480831) | 0.147748 / 0.680424 (-0.532676) | 0.017119 / 0.534201 (-0.517082) | 0.369481 / 0.579283 (-0.209802) | 0.390133 / 0.434364 (-0.044231) | 0.458768 / 0.540337 (-0.081569) | 0.548989 / 1.386936 (-0.837947) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006319 / 0.011353 (-0.005034) | 0.003975 / 0.011008 (-0.007033) | 0.077886 / 0.038508 (0.039378) | 0.038322 / 0.023109 (0.015213) | 0.379851 / 0.275898 (0.103953) | 0.456749 / 0.323480 (0.133269) | 0.005320 / 0.007986 (-0.002665) | 0.003135 / 0.004328 (-0.001194) | 0.078272 / 0.004250 (0.074022) | 0.059919 / 0.037052 (0.022866) | 0.430062 / 0.258489 (0.171573) | 0.477432 / 0.293841 (0.183591) | 0.029713 / 0.128546 (-0.098833) | 0.008704 / 0.075646 (-0.066942) | 0.082488 / 0.419271 (-0.336784) | 0.044667 / 0.043533 (0.001134) | 0.354910 / 0.255139 (0.099771) | 0.434637 / 0.283200 (0.151438) | 0.026402 / 0.141683 (-0.115281) | 1.528825 / 1.452155 (0.076671) | 1.548209 / 1.492716 (0.055493) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.237988 / 0.018006 (0.219982) | 0.420402 / 0.000490 (0.419913) | 0.003098 / 0.000200 (0.002898) | 0.000077 / 0.000054 (0.000023) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026253 / 0.037411 (-0.011159) | 0.106137 / 0.014526 (0.091611) | 0.110273 / 0.176557 (-0.066284) | 0.165316 / 0.737135 (-0.571819) | 0.115720 / 0.296338 (-0.180619) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.454244 / 0.215209 (0.239035) | 4.526018 / 2.077655 (2.448364) | 2.395985 / 1.504120 (0.891865) | 2.234822 / 1.541195 (0.693627) | 2.370235 / 1.468490 (0.901745) | 0.567607 / 4.584777 (-4.017169) | 3.650156 / 3.745712 (-0.095556) | 3.360094 / 5.269862 (-1.909768) | 1.415252 / 4.565676 (-3.150424) | 0.068012 / 0.424275 (-0.356263) | 0.011135 / 0.007607 (0.003528) | 0.561967 / 0.226044 (0.335923) | 5.621819 / 2.268929 (3.352890) | 2.676912 / 55.444624 (-52.767712) | 2.338306 / 6.876477 (-4.538171) | 2.430888 / 2.142072 (0.288815) | 0.684576 / 4.805227 (-4.120651) | 0.138923 / 6.500664 (-6.361741) | 0.069933 / 0.075469 (-0.005536) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.313383 / 1.841788 (-0.528405) | 15.125088 / 8.074308 (7.050780) | 14.801501 / 10.191392 (4.610109) | 0.134235 / 0.680424 (-0.546189) | 0.017058 / 0.534201 (-0.517143) | 0.365166 / 0.579283 (-0.214117) | 0.395415 / 0.434364 (-0.038949) | 0.419355 / 0.540337 (-0.120983) | 0.513411 / 1.386936 (-0.873525) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#8b9649b3cfb49342e44873ce7e29e0c75eaf3efa \"CML watermark\")\n"
] | 1,688,130,950,000 | 1,688,134,712,000 | 1,688,134,284,000 | CONTRIBUTOR | null | Fix #5998 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6001/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6001/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6001",
"html_url": "https://github.com/huggingface/datasets/pull/6001",
"diff_url": "https://github.com/huggingface/datasets/pull/6001.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6001.patch",
"merged_at": "2023-06-30T14:11:24"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6000 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6000/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6000/comments | https://api.github.com/repos/huggingface/datasets/issues/6000/events | https://github.com/huggingface/datasets/pull/6000 | 1,782,456,878 | PR_kwDODunzps5UU_FB | 6,000 | Pin `joblib` to avoid `joblibspark` test failures | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006722 / 0.011353 (-0.004631) | 0.004425 / 0.011008 (-0.006583) | 0.100850 / 0.038508 (0.062341) | 0.040816 / 0.023109 (0.017707) | 0.348823 / 0.275898 (0.072925) | 0.446285 / 0.323480 (0.122805) | 0.005738 / 0.007986 (-0.002247) | 0.003517 / 0.004328 (-0.000811) | 0.078824 / 0.004250 (0.074574) | 0.064695 / 0.037052 (0.027643) | 0.389894 / 0.258489 (0.131405) | 0.416107 / 0.293841 (0.122266) | 0.028850 / 0.128546 (-0.099696) | 0.009011 / 0.075646 (-0.066635) | 0.323117 / 0.419271 (-0.096154) | 0.049162 / 0.043533 (0.005629) | 0.340144 / 0.255139 (0.085005) | 0.382072 / 0.283200 (0.098872) | 0.023160 / 0.141683 (-0.118523) | 1.549218 / 1.452155 (0.097063) | 1.581266 / 1.492716 (0.088550) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.293360 / 0.018006 (0.275353) | 0.602189 / 0.000490 (0.601700) | 0.004608 / 0.000200 (0.004408) | 0.000082 / 0.000054 (0.000028) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028144 / 0.037411 (-0.009267) | 0.107088 / 0.014526 (0.092562) | 0.112188 / 0.176557 (-0.064369) | 0.174669 / 0.737135 (-0.562466) | 0.116359 / 0.296338 (-0.179980) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.422911 / 0.215209 (0.207702) | 4.231524 / 2.077655 (2.153869) | 1.906711 / 1.504120 (0.402591) | 1.706841 / 1.541195 (0.165646) | 1.792066 / 1.468490 
(0.323576) | 0.559221 / 4.584777 (-4.025556) | 3.434280 / 3.745712 (-0.311433) | 1.918714 / 5.269862 (-3.351148) | 1.073070 / 4.565676 (-3.492606) | 0.067891 / 0.424275 (-0.356384) | 0.011927 / 0.007607 (0.004320) | 0.530843 / 0.226044 (0.304799) | 5.309213 / 2.268929 (3.040285) | 2.439246 / 55.444624 (-53.005378) | 2.101245 / 6.876477 (-4.775231) | 2.177436 / 2.142072 (0.035363) | 0.672150 / 4.805227 (-4.133077) | 0.137571 / 6.500664 (-6.363093) | 0.068343 / 0.075469 (-0.007126) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.265262 / 1.841788 (-0.576525) | 14.988021 / 8.074308 (6.913713) | 13.611677 / 10.191392 (3.420285) | 0.171389 / 0.680424 (-0.509035) | 0.017681 / 0.534201 (-0.516520) | 0.377542 / 0.579283 (-0.201741) | 0.399475 / 0.434364 (-0.034889) | 0.469553 / 0.540337 (-0.070785) | 0.561888 / 1.386936 (-0.825048) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006782 / 0.011353 (-0.004571) | 0.004412 / 0.011008 (-0.006597) | 0.078594 / 0.038508 (0.040086) | 0.039930 / 0.023109 (0.016820) | 0.371879 / 0.275898 (0.095981) | 0.444910 / 0.323480 (0.121430) | 0.005707 / 0.007986 (-0.002279) | 0.003901 / 0.004328 (-0.000427) | 0.080125 / 0.004250 (0.075875) | 0.063977 / 0.037052 (0.026925) | 0.382781 / 0.258489 (0.124292) | 0.441791 / 0.293841 (0.147950) | 0.030428 / 0.128546 (-0.098118) | 0.009008 / 0.075646 (-0.066638) | 0.084447 / 0.419271 (-0.334824) | 0.044432 / 0.043533 (0.000899) | 0.365686 / 0.255139 (0.110547) | 0.394312 / 0.283200 (0.111113) | 0.024508 / 0.141683 (-0.117175) | 1.577020 / 1.452155 (0.124865) | 1.630259 / 1.492716 (0.137543) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.307960 / 0.018006 (0.289953) | 0.591473 / 0.000490 (0.590983) | 0.008098 / 0.000200 (0.007898) | 0.000110 / 0.000054 (0.000056) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029567 / 0.037411 (-0.007845) | 0.112773 / 0.014526 (0.098247) | 0.117362 / 0.176557 (-0.059194) | 0.174293 / 0.737135 (-0.562843) | 0.123156 / 0.296338 (-0.173182) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.457475 / 0.215209 (0.242266) | 4.599067 / 2.077655 (2.521412) | 2.262638 / 1.504120 (0.758518) | 2.124943 / 1.541195 (0.583748) | 2.339912 / 1.468490 (0.871422) | 0.566264 / 4.584777 (-4.018513) | 3.489261 / 3.745712 (-0.256451) | 1.925151 / 5.269862 (-3.344711) | 1.099389 / 4.565676 (-3.466287) | 0.068232 / 0.424275 (-0.356043) | 0.011660 / 0.007607 (0.004052) | 0.571227 / 0.226044 (0.345183) | 5.702059 / 2.268929 (3.433130) | 2.837701 / 55.444624 (-52.606924) | 2.605468 / 6.876477 (-4.271008) | 2.818396 / 2.142072 (0.676323) | 0.681856 / 4.805227 (-4.123371) | 0.141401 / 6.500664 (-6.359263) | 0.069728 / 0.075469 (-0.005741) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.354935 / 1.841788 (-0.486853) | 15.437404 / 8.074308 (7.363095) | 15.415193 / 10.191392 (5.223801) | 0.153459 / 0.680424 (-0.526964) | 0.017190 / 0.534201 (-0.517011) | 0.367256 / 0.579283 (-0.212027) | 0.392709 / 0.434364 (-0.041655) | 0.426125 / 0.540337 (-0.114213) | 0.522612 / 1.386936 (-0.864324) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#25ac13d8ab23e7d99252ce083a45e8333b6bbcdc \"CML watermark\")\n",
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009183 / 0.011353 (-0.002170) | 0.005232 / 0.011008 (-0.005776) | 0.120349 / 0.038508 (0.081841) | 0.044715 / 0.023109 (0.021606) | 0.361519 / 0.275898 (0.085621) | 0.463702 / 0.323480 (0.140223) | 0.005842 / 0.007986 (-0.002144) | 0.004041 / 0.004328 (-0.000288) | 0.096953 / 0.004250 (0.092703) | 0.070593 / 0.037052 (0.033540) | 0.409790 / 0.258489 (0.151301) | 0.477452 / 0.293841 (0.183611) | 0.045827 / 0.128546 (-0.082719) | 0.014038 / 0.075646 (-0.061608) | 0.421317 / 0.419271 (0.002045) | 0.065276 / 0.043533 (0.021743) | 0.360074 / 0.255139 (0.104935) | 0.409147 / 0.283200 (0.125947) | 0.032444 / 0.141683 (-0.109238) | 1.739257 / 1.452155 (0.287102) | 1.831408 / 1.492716 (0.338692) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.274852 / 0.018006 (0.256846) | 0.596320 / 0.000490 (0.595830) | 0.006399 / 0.000200 (0.006199) | 0.000133 / 0.000054 (0.000079) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031400 / 0.037411 (-0.006012) | 0.127052 / 0.014526 (0.112526) | 0.134269 / 0.176557 (-0.042288) | 0.225998 / 0.737135 (-0.511137) | 0.150019 / 0.296338 (-0.146319) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.654202 / 0.215209 (0.438993) | 6.216735 / 2.077655 (4.139081) | 2.440214 / 1.504120 (0.936094) | 2.150575 / 1.541195 (0.609380) | 2.124790 / 1.468490 
(0.656300) | 0.923514 / 4.584777 (-3.661263) | 5.556924 / 3.745712 (1.811212) | 2.843886 / 5.269862 (-2.425975) | 1.834232 / 4.565676 (-2.731444) | 0.111735 / 0.424275 (-0.312540) | 0.014823 / 0.007607 (0.007216) | 0.820503 / 0.226044 (0.594459) | 7.887737 / 2.268929 (5.618809) | 3.120307 / 55.444624 (-52.324317) | 2.405856 / 6.876477 (-4.470621) | 2.411239 / 2.142072 (0.269167) | 1.071283 / 4.805227 (-3.733944) | 0.227738 / 6.500664 (-6.272926) | 0.073516 / 0.075469 (-0.001953) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.531806 / 1.841788 (-0.309982) | 18.547661 / 8.074308 (10.473353) | 21.083922 / 10.191392 (10.892530) | 0.241706 / 0.680424 (-0.438718) | 0.034169 / 0.534201 (-0.500032) | 0.497514 / 0.579283 (-0.081769) | 0.599801 / 0.434364 (0.165437) | 0.576465 / 0.540337 (0.036127) | 0.673509 / 1.386936 (-0.713427) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007558 / 0.011353 (-0.003795) | 0.005001 / 0.011008 (-0.006008) | 0.093809 / 0.038508 (0.055301) | 0.039792 / 0.023109 (0.016683) | 0.456869 / 0.275898 (0.180971) | 0.493370 / 0.323480 (0.169891) | 0.005561 / 0.007986 (-0.002424) | 0.003982 / 0.004328 (-0.000346) | 0.085421 / 0.004250 (0.081170) | 0.059817 / 0.037052 (0.022765) | 0.468040 / 0.258489 (0.209550) | 0.514853 / 0.293841 (0.221012) | 0.044267 / 0.128546 (-0.084279) | 0.012674 / 0.075646 (-0.062972) | 0.098324 / 0.419271 (-0.320948) | 0.056604 / 0.043533 (0.013071) | 0.432200 / 0.255139 (0.177061) | 0.459812 / 0.283200 (0.176612) | 0.033872 / 0.141683 (-0.107811) | 1.618576 / 1.452155 (0.166421) | 1.676562 / 1.492716 (0.183846) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.230625 / 0.018006 (0.212619) | 0.600558 / 0.000490 (0.600068) | 0.003419 / 0.000200 (0.003219) | 0.000113 / 0.000054 (0.000059) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026916 / 0.037411 (-0.010496) | 0.103003 / 0.014526 (0.088478) | 0.117078 / 0.176557 (-0.059478) | 0.169359 / 0.737135 (-0.567776) | 0.120305 / 0.296338 (-0.176034) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.616877 / 0.215209 (0.401668) | 6.157232 / 2.077655 (4.079577) | 2.869219 / 1.504120 (1.365099) | 2.381410 / 1.541195 (0.840216) | 2.417357 / 1.468490 (0.948867) | 0.914947 / 4.584777 (-3.669830) | 5.718526 / 3.745712 (1.972814) | 2.757253 / 5.269862 (-2.512609) | 1.794122 / 4.565676 (-2.771554) | 0.108423 / 0.424275 (-0.315852) | 0.013378 / 0.007607 (0.005771) | 0.831067 / 0.226044 (0.605023) | 8.478946 / 2.268929 (6.210018) | 3.685937 / 55.444624 (-51.758687) | 2.867472 / 6.876477 (-4.009005) | 2.895975 / 2.142072 (0.753903) | 1.137547 / 4.805227 (-3.667681) | 0.213891 / 6.500664 (-6.286773) | 0.075825 / 0.075469 (0.000356) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.621193 / 1.841788 (-0.220594) | 17.322110 / 8.074308 (9.247802) | 21.804016 / 10.191392 (11.612624) | 0.243692 / 0.680424 (-0.436732) | 0.030331 / 0.534201 (-0.503870) | 0.492186 / 0.579283 (-0.087097) | 0.632583 / 0.434364 (0.198219) | 0.576265 / 0.540337 (0.035927) | 0.713165 / 1.386936 (-0.673771) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#a293ceb5aa41c4ae265c0e2aa9ada2d544466121 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008916 / 0.011353 (-0.002437) | 0.004737 / 0.011008 (-0.006271) | 0.134271 / 0.038508 (0.095763) | 0.054472 / 0.023109 (0.031363) | 0.380942 / 0.275898 (0.105044) | 0.474138 / 0.323480 (0.150658) | 0.007917 / 0.007986 (-0.000068) | 0.003748 / 0.004328 (-0.000580) | 0.092765 / 0.004250 (0.088515) | 0.077873 / 0.037052 (0.040821) | 0.397533 / 0.258489 (0.139043) | 0.454737 / 0.293841 (0.160896) | 0.039901 / 0.128546 (-0.088645) | 0.010188 / 0.075646 (-0.065458) | 0.447312 / 0.419271 (0.028040) | 0.068684 / 0.043533 (0.025151) | 0.371554 / 0.255139 (0.116415) | 0.459655 / 0.283200 (0.176455) | 0.027157 / 0.141683 (-0.114526) | 1.874643 / 1.452155 (0.422488) | 2.014800 / 1.492716 (0.522083) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.227079 / 0.018006 (0.209073) | 0.483241 / 0.000490 (0.482751) | 0.012404 / 0.000200 (0.012204) | 0.000409 / 0.000054 (0.000354) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033135 / 0.037411 (-0.004277) | 0.137782 / 0.014526 (0.123257) | 0.142951 / 0.176557 (-0.033605) | 0.209825 / 0.737135 (-0.527311) | 0.152438 / 0.296338 (-0.143900) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.513066 / 0.215209 (0.297857) | 5.122776 / 2.077655 (3.045121) | 2.399270 / 1.504120 (0.895150) | 2.180143 / 1.541195 (0.638949) | 2.286395 / 1.468490 
(0.817905) | 0.641866 / 4.584777 (-3.942911) | 4.694922 / 3.745712 (0.949210) | 2.543390 / 5.269862 (-2.726472) | 1.398592 / 4.565676 (-3.167084) | 0.088662 / 0.424275 (-0.335613) | 0.015854 / 0.007607 (0.008247) | 0.688891 / 0.226044 (0.462847) | 6.370148 / 2.268929 (4.101220) | 2.949974 / 55.444624 (-52.494650) | 2.538049 / 6.876477 (-4.338428) | 2.699380 / 2.142072 (0.557308) | 0.792670 / 4.805227 (-4.012557) | 0.169126 / 6.500664 (-6.331538) | 0.078511 / 0.075469 (0.003042) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.609119 / 1.841788 (-0.232669) | 18.785069 / 8.074308 (10.710761) | 16.670783 / 10.191392 (6.479391) | 0.213081 / 0.680424 (-0.467343) | 0.023904 / 0.534201 (-0.510296) | 0.567720 / 0.579283 (-0.011564) | 0.505806 / 0.434364 (0.071442) | 0.649466 / 0.540337 (0.109129) | 0.773174 / 1.386936 (-0.613762) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008036 / 0.011353 (-0.003317) | 0.004808 / 0.011008 (-0.006201) | 0.094316 / 0.038508 (0.055808) | 0.056174 / 0.023109 (0.033065) | 0.481618 / 0.275898 (0.205720) | 0.565300 / 0.323480 (0.241820) | 0.006339 / 0.007986 (-0.001646) | 0.003950 / 0.004328 (-0.000379) | 0.093389 / 0.004250 (0.089139) | 0.076163 / 0.037052 (0.039111) | 0.489013 / 0.258489 (0.230524) | 0.565451 / 0.293841 (0.271611) | 0.039392 / 0.128546 (-0.089155) | 0.010553 / 0.075646 (-0.065093) | 0.101406 / 0.419271 (-0.317865) | 0.062355 / 0.043533 (0.018822) | 0.470461 / 0.255139 (0.215322) | 0.502574 / 0.283200 (0.219375) | 0.030196 / 0.141683 (-0.111486) | 1.893926 / 1.452155 (0.441771) | 1.958902 / 1.492716 (0.466185) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.198074 / 0.018006 (0.180068) | 0.476828 / 0.000490 (0.476338) | 0.003457 / 0.000200 (0.003257) | 0.000105 / 0.000054 (0.000051) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.037576 / 0.037411 (0.000165) | 0.146663 / 0.014526 (0.132138) | 0.152969 / 0.176557 (-0.023588) | 0.218683 / 0.737135 (-0.518452) | 0.161552 / 0.296338 (-0.134786) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.525988 / 0.215209 (0.310779) | 5.234673 / 2.077655 (3.157018) | 2.571668 / 1.504120 (1.067548) | 2.339760 / 1.541195 (0.798565) | 2.422886 / 1.468490 (0.954395) | 0.651537 / 4.584777 (-3.933240) | 4.811148 / 3.745712 (1.065436) | 4.451165 / 5.269862 (-0.818697) | 2.016283 / 4.565676 (-2.549394) | 0.096393 / 0.424275 (-0.327882) | 0.015222 / 0.007607 (0.007615) | 0.739132 / 0.226044 (0.513087) | 6.813327 / 2.268929 (4.544399) | 3.169018 / 55.444624 (-52.275606) | 2.783120 / 6.876477 (-4.093356) | 2.918979 / 2.142072 (0.776907) | 0.797476 / 4.805227 (-4.007751) | 0.171038 / 6.500664 (-6.329626) | 0.079878 / 0.075469 (0.004409) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.595082 / 1.841788 (-0.246705) | 19.685844 / 8.074308 (11.611536) | 17.518989 / 10.191392 (7.327597) | 0.220015 / 0.680424 (-0.460409) | 0.026351 / 0.534201 (-0.507850) | 0.578977 / 0.579283 (-0.000306) | 0.549564 / 0.434364 (0.115200) | 0.667564 / 0.540337 (0.127227) | 0.802121 / 1.386936 (-0.584815) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#e9aee64766aaddfda60a735cfc93345aed64bdcf \"CML watermark\")\n"
] | 1,688,128,614,000 | 1,688,131,025,000 | 1,688,130,507,000 | CONTRIBUTOR | null | `joblibspark` doesn't support the latest `joblib` release.
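For context, a typical mitigation is to pin the incompatible dependency in the test requirements. The sketch below only illustrates that idea; the file, variable name, and version bound are assumptions, not read from this PR's diff:

```python
# Hypothetical excerpt from a setup.py-style test-requirements list (illustrative only):
TESTS_REQUIRE = [
    "joblibspark",
    "joblib<1.3.0",  # assumed bound: stay on a joblib release that joblibspark still supports
]
```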
See https://github.com/huggingface/datasets/actions/runs/5401870932/jobs/9812337078 for the errors | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6000/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6000/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6000",
"html_url": "https://github.com/huggingface/datasets/pull/6000",
"diff_url": "https://github.com/huggingface/datasets/pull/6000.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6000.patch",
"merged_at": "2023-06-30T13:08:27"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5999 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5999/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5999/comments | https://api.github.com/repos/huggingface/datasets/issues/5999/events | https://github.com/huggingface/datasets/issues/5999 | 1,781,851,513 | I_kwDODunzps5qNOV5 | 5,999 | Getting a 409 error while loading xglue dataset | {
"login": "Praful932",
"id": 45713796,
"node_id": "MDQ6VXNlcjQ1NzEzNzk2",
"avatar_url": "https://avatars.githubusercontent.com/u/45713796?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Praful932",
"html_url": "https://github.com/Praful932",
"followers_url": "https://api.github.com/users/Praful932/followers",
"following_url": "https://api.github.com/users/Praful932/following{/other_user}",
"gists_url": "https://api.github.com/users/Praful932/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Praful932/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Praful932/subscriptions",
"organizations_url": "https://api.github.com/users/Praful932/orgs",
"repos_url": "https://api.github.com/users/Praful932/repos",
"events_url": "https://api.github.com/users/Praful932/events{/privacy}",
"received_events_url": "https://api.github.com/users/Praful932/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | [
"Thanks for reporting, @Praful932.\r\n\r\nLet's continue the conversation on the Hub: https://huggingface.co/datasets/xglue/discussions/5"
] | 1,688,098,434,000 | 1,688,104,643,000 | 1,688,104,642,000 | NONE | null | ### Describe the bug
Unable to load the xglue dataset.
### Steps to reproduce the bug
```python
import datasets
dataset = datasets.load_dataset("xglue", "ntg")
```
> ConnectionError: Couldn't reach https://xglue.blob.core.windows.net/xglue/xglue_full_dataset.tar.gz (error 409)
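For what it's worth, the 409 is reproducible outside of `datasets` as well (a minimal check, assuming outbound network access; the URL is the one from the error above):

```python
import requests

# HEAD request against the Azure blob hosting the xglue archive; a 409 here
# indicates the problem is on the hosting side rather than in `datasets` itself.
response = requests.head("https://xglue.blob.core.windows.net/xglue/xglue_full_dataset.tar.gz")
print(response.status_code)  # printed 409 at the time of reporting
```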
### Expected behavior
Expected the dataset to load
### Environment info
- `datasets` version: 2.13.1
- Platform: Linux-5.15.107+-x86_64-with-glibc2.31
- Python version: 3.10.12
- Huggingface_hub version: 0.15.1
- PyArrow version: 9.0.0
- Pandas version: 1.5.3 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5999/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5999/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5998 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5998/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5998/comments | https://api.github.com/repos/huggingface/datasets/issues/5998/events | https://github.com/huggingface/datasets/issues/5998 | 1,781,805,018 | I_kwDODunzps5qNC_a | 5,998 | The current implementation has a potential bug in the sort method | {
"login": "wangyuxinwhy",
"id": 22192665,
"node_id": "MDQ6VXNlcjIyMTkyNjY1",
"avatar_url": "https://avatars.githubusercontent.com/u/22192665?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wangyuxinwhy",
"html_url": "https://github.com/wangyuxinwhy",
"followers_url": "https://api.github.com/users/wangyuxinwhy/followers",
"following_url": "https://api.github.com/users/wangyuxinwhy/following{/other_user}",
"gists_url": "https://api.github.com/users/wangyuxinwhy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wangyuxinwhy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wangyuxinwhy/subscriptions",
"organizations_url": "https://api.github.com/users/wangyuxinwhy/orgs",
"repos_url": "https://api.github.com/users/wangyuxinwhy/repos",
"events_url": "https://api.github.com/users/wangyuxinwhy/events{/privacy}",
"received_events_url": "https://api.github.com/users/wangyuxinwhy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks for reporting, @wangyuxinwhy. "
] | 1,688,095,017,000 | 1,688,134,863,000 | 1,688,134,285,000 | NONE | null | ### Describe the bug
In the `sort` method, there is this piece of code:
```python
# column_names: Union[str, Sequence_[str]]
# Check proper format of and for duplicates in column_names
if not isinstance(column_names, list):
column_names = [column_names]
```
Based on the `column_names` type annotation, a tuple should be accepted, but passing one raises an error, as in the example below.
```python
from datasets import load_dataset
dataset = load_dataset('glue', 'ax')['test']
dataset.sort(column_names=('premise', 'hypothesis'))
# Raises ValueError: Column '('premise', 'hypothesis')' not found in the dataset.
```
Of course, after converting the tuple into a list, everything worked fine.
Changing the code to the following fixes the problem:
```python
# Check proper format of and for duplicates in column_names
if not isinstance(column_names, list):
if isinstance(column_names, str):
column_names = [column_names]
else:
column_names = list(column_names)
```
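For illustration, here is the same coercion logic as a standalone sketch (`normalize_column_names` is a hypothetical helper, not part of `datasets`), showing that a string, a tuple, and a list all normalize to the same list:

```python
from typing import Sequence, Union


def normalize_column_names(column_names: Union[str, Sequence[str]]) -> list:
    # Coerce a single column name or any sequence of names to a list of names.
    if isinstance(column_names, str):
        return [column_names]
    return list(column_names)


assert normalize_column_names("premise") == ["premise"]
assert normalize_column_names(("premise", "hypothesis")) == ["premise", "hypothesis"]
assert normalize_column_names(["premise", "hypothesis"]) == ["premise", "hypothesis"]
```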
### Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset('glue', 'ax')['test']
dataset.sort(column_names=('premise', 'hypothesis'))
# Raises ValueError: Column '('premise', 'hypothesis')' not found in the dataset.
```
### Expected behavior
Passing a tuple to `column_names` should be equivalent to passing a list.
### Environment info
- `datasets` version: 2.13.0
- Platform: macOS-13.1-arm64-arm-64bit
- Python version: 3.10.11
- Huggingface_hub version: 0.15.1
- PyArrow version: 12.0.1
- Pandas version: 2.0.2 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5998/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5998/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5997 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5997/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5997/comments | https://api.github.com/repos/huggingface/datasets/issues/5997/events | https://github.com/huggingface/datasets/issues/5997 | 1,781,582,818 | I_kwDODunzps5qMMvi | 5,997 | extend the map function so it can wrap around long text that does not fit in the context window | {
"login": "siddhsql",
"id": 127623723,
"node_id": "U_kgDOB5tiKw",
"avatar_url": "https://avatars.githubusercontent.com/u/127623723?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/siddhsql",
"html_url": "https://github.com/siddhsql",
"followers_url": "https://api.github.com/users/siddhsql/followers",
"following_url": "https://api.github.com/users/siddhsql/following{/other_user}",
"gists_url": "https://api.github.com/users/siddhsql/gists{/gist_id}",
"starred_url": "https://api.github.com/users/siddhsql/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/siddhsql/subscriptions",
"organizations_url": "https://api.github.com/users/siddhsql/orgs",
"repos_url": "https://api.github.com/users/siddhsql/repos",
"events_url": "https://api.github.com/users/siddhsql/events{/privacy}",
"received_events_url": "https://api.github.com/users/siddhsql/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | [
"I just noticed the [docs](https://github.com/huggingface/datasets/blob/main/src/datasets/arrow_dataset.py#L2881C11-L2881C200) say:\r\n\r\n>If batched is `True` and `batch_size` is `n > 1`, then the function takes a batch of `n` examples as input and can return a batch with `n` examples, or with an arbitrary number of examples.\r\n\r\nso maybe this is a bug then.",
"All the values in a batch must be of the same length. So one solution is dropping all the input columns:\r\n```python\r\ndata = data.map(lambda samples: tokenizer(samples[\"text\"], max_length=tokenizer.model_max_length, truncation=True, stride=4, return_overflowing_tokens=True), batched=True, remove_columns=data.column_names)\r\n```\r\n\r\nAnother is padding/transforming the input columns to the tokenizer output's length (447). "
] | 1,688,076,921,000 | 1,688,407,132,000 | null | NONE | null | ### Feature request
I understand `datasets` provides a [`map`](https://github.com/huggingface/datasets/blob/main/src/datasets/arrow_dataset.py#L2849) function. This function takes a callable that is used to tokenize the text on which a model is trained. Frequently this text will not fit within a model's context window. In this case it would be useful to wrap the text around into multiple rows, with each row fitting the model's context window. I tried to do this using the following code, which I borrowed from [here](https://stackoverflow.com/a/76343993/147530):
```
data = data.map(lambda samples: tokenizer(samples["text"], max_length=tokenizer.model_max_length, truncation=True, stride=4, return_overflowing_tokens=True), batched=True)
```
but running the code gives me this error:
```
File "/llm/fine-tune.py", line 117, in <module>
data = data.map(lambda samples: tokenizer(samples["text"], max_length=tokenizer.model_max_length, truncation=True, stride=4, return_overflowing_tokens=True), batched=True)
File "/llm/.env/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 580, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/llm/.env/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 545, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/llm/.env/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 3087, in map
for rank, done, content in Dataset._map_single(**dataset_kwargs):
File "/llm/.env/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 3480, in _map_single
writer.write_batch(batch)
File "/llm/.env/lib/python3.9/site-packages/datasets/arrow_writer.py", line 556, in write_batch
pa_table = pa.Table.from_arrays(arrays, schema=schema)
File "pyarrow/table.pxi", line 3798, in pyarrow.lib.Table.from_arrays
File "pyarrow/table.pxi", line 2962, in pyarrow.lib.Table.validate
File "pyarrow/error.pxi", line 100, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Column 1 named input_ids expected length 394 but got length 447
```
The lambda function I have provided correctly chops up long text so it wraps around (which is why 394 samples become 447 rows after wrapping), but the dataset `map` function rejects the result because the output columns no longer match the length of the untouched input columns.
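Following the explanation in the comments above (all values in a returned batch must have the same length), one workaround is to drop the original input columns so that only the equally-sized tokenizer outputs remain:

```python
data = data.map(
    lambda samples: tokenizer(
        samples["text"],
        max_length=tokenizer.model_max_length,
        truncation=True,
        stride=4,
        return_overflowing_tokens=True,
    ),
    batched=True,
    remove_columns=data.column_names,  # drop "text" etc., leaving only same-length tokenizer columns
)
```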
### Motivation
please see above
### Your contribution
I'm afraid I don't have much knowledge to help | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5997/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5997/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5996 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5996/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5996/comments | https://api.github.com/repos/huggingface/datasets/issues/5996/events | https://github.com/huggingface/datasets/pull/5996 | 1,779,294,374 | PR_kwDODunzps5UKP0i | 5,996 | Deprecate `use_auth_token` in favor of `token` | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006134 / 0.011353 (-0.005219) | 0.003816 / 0.011008 (-0.007193) | 0.098226 / 0.038508 (0.059718) | 0.036830 / 0.023109 (0.013721) | 0.314551 / 0.275898 (0.038653) | 0.372251 / 0.323480 (0.048771) | 0.004762 / 0.007986 (-0.003224) | 0.003041 / 0.004328 (-0.001287) | 0.077651 / 0.004250 (0.073401) | 0.052445 / 0.037052 (0.015393) | 0.324632 / 0.258489 (0.066143) | 0.365724 / 0.293841 (0.071883) | 0.028069 / 0.128546 (-0.100477) | 0.008444 / 0.075646 (-0.067203) | 0.312767 / 0.419271 (-0.106505) | 0.047773 / 0.043533 (0.004240) | 0.305317 / 0.255139 (0.050178) | 0.332007 / 0.283200 (0.048807) | 0.018985 / 0.141683 (-0.122698) | 1.538022 / 1.452155 (0.085868) | 1.575898 / 1.492716 (0.083182) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.204780 / 0.018006 (0.186774) | 0.428125 / 0.000490 (0.427635) | 0.003454 / 0.000200 (0.003254) | 0.000078 / 0.000054 (0.000024) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025064 / 0.037411 (-0.012348) | 0.099419 / 0.014526 (0.084893) | 0.111068 / 0.176557 (-0.065489) | 0.169775 / 0.737135 (-0.567361) | 0.112067 / 0.296338 (-0.184271) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.429642 / 0.215209 (0.214433) | 4.275556 / 2.077655 (2.197901) | 1.914658 / 1.504120 (0.410539) | 1.706556 / 1.541195 (0.165361) | 1.754228 / 1.468490 
(0.285738) | 0.563669 / 4.584777 (-4.021108) | 3.391501 / 3.745712 (-0.354211) | 1.791517 / 5.269862 (-3.478345) | 1.030704 / 4.565676 (-3.534973) | 0.070882 / 0.424275 (-0.353393) | 0.011351 / 0.007607 (0.003744) | 0.529438 / 0.226044 (0.303394) | 5.294316 / 2.268929 (3.025387) | 2.344653 / 55.444624 (-53.099972) | 1.997468 / 6.876477 (-4.879009) | 2.108932 / 2.142072 (-0.033140) | 0.676794 / 4.805227 (-4.128433) | 0.135058 / 6.500664 (-6.365607) | 0.065857 / 0.075469 (-0.009612) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.231864 / 1.841788 (-0.609924) | 13.986694 / 8.074308 (5.912386) | 13.306600 / 10.191392 (3.115208) | 0.145520 / 0.680424 (-0.534904) | 0.016717 / 0.534201 (-0.517484) | 0.366303 / 0.579283 (-0.212980) | 0.391637 / 0.434364 (-0.042727) | 0.425445 / 0.540337 (-0.114892) | 0.507719 / 1.386936 (-0.879217) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006236 / 0.011353 (-0.005116) | 0.003766 / 0.011008 (-0.007242) | 0.076794 / 0.038508 (0.038286) | 0.037210 / 0.023109 (0.014101) | 0.378387 / 0.275898 (0.102489) | 0.425456 / 0.323480 (0.101977) | 0.004694 / 0.007986 (-0.003291) | 0.002921 / 0.004328 (-0.001407) | 0.076985 / 0.004250 (0.072735) | 0.052188 / 0.037052 (0.015136) | 0.394385 / 0.258489 (0.135896) | 0.432527 / 0.293841 (0.138686) | 0.029091 / 0.128546 (-0.099455) | 0.008364 / 0.075646 (-0.067282) | 0.082583 / 0.419271 (-0.336689) | 0.042928 / 0.043533 (-0.000605) | 0.375321 / 0.255139 (0.120182) | 0.391719 / 0.283200 (0.108519) | 0.019388 / 0.141683 (-0.122295) | 1.550644 / 1.452155 (0.098489) | 1.604882 / 1.492716 (0.112166) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.236859 / 0.018006 (0.218853) | 0.418528 / 0.000490 (0.418039) | 0.000388 / 0.000200 (0.000188) | 0.000059 / 0.000054 (0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025548 / 0.037411 (-0.011863) | 0.100644 / 0.014526 (0.086118) | 0.109102 / 0.176557 (-0.067455) | 0.161694 / 0.737135 (-0.575441) | 0.112088 / 0.296338 (-0.184250) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.484128 / 0.215209 (0.268919) | 4.849952 / 2.077655 (2.772297) | 2.512769 / 1.504120 (1.008649) | 2.303295 / 1.541195 (0.762100) | 2.356699 / 1.468490 (0.888209) | 0.564181 / 4.584777 (-4.020596) | 3.421393 / 3.745712 (-0.324319) | 2.570875 / 5.269862 (-2.698987) | 1.474307 / 4.565676 (-3.091370) | 0.068035 / 0.424275 (-0.356240) | 0.011300 / 0.007607 (0.003693) | 0.587867 / 0.226044 (0.361823) | 5.862447 / 2.268929 (3.593519) | 3.004017 / 55.444624 (-52.440607) | 2.664989 / 6.876477 (-4.211488) | 2.740020 / 2.142072 (0.597948) | 0.680840 / 4.805227 (-4.124387) | 0.137001 / 6.500664 (-6.363663) | 0.068098 / 0.075469 (-0.007371) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.297362 / 1.841788 (-0.544426) | 14.207891 / 8.074308 (6.133583) | 14.087562 / 10.191392 (3.896170) | 0.149514 / 0.680424 (-0.530910) | 0.016566 / 0.534201 (-0.517635) | 0.367602 / 0.579283 (-0.211681) | 0.400692 / 0.434364 (-0.033671) | 0.432907 / 0.540337 (-0.107431) | 0.525924 / 1.386936 (-0.861012) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#1ec069feaaf6c28d4e4df76d344693b591a74c3f \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006223 / 0.011353 (-0.005130) | 0.003672 / 0.011008 (-0.007336) | 0.097451 / 0.038508 (0.058943) | 0.036243 / 0.023109 (0.013133) | 0.375650 / 0.275898 (0.099752) | 0.431652 / 0.323480 (0.108172) | 0.004758 / 0.007986 (-0.003227) | 0.002941 / 0.004328 (-0.001387) | 0.077383 / 0.004250 (0.073132) | 0.055342 / 0.037052 (0.018289) | 0.390335 / 0.258489 (0.131846) | 0.427867 / 0.293841 (0.134026) | 0.027619 / 0.128546 (-0.100927) | 0.008244 / 0.075646 (-0.067402) | 0.313499 / 0.419271 (-0.105773) | 0.054987 / 0.043533 (0.011454) | 0.394044 / 0.255139 (0.138905) | 0.398784 / 0.283200 (0.115584) | 0.026499 / 0.141683 (-0.115184) | 1.496907 / 1.452155 (0.044753) | 1.554465 / 1.492716 (0.061749) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.241197 / 0.018006 (0.223190) | 0.427856 / 0.000490 (0.427366) | 0.006264 / 0.000200 (0.006065) | 0.000218 / 0.000054 (0.000164) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025550 / 0.037411 (-0.011862) | 0.104426 / 0.014526 (0.089901) | 0.110310 / 0.176557 (-0.066246) | 0.173813 / 0.737135 (-0.563322) | 0.112129 / 0.296338 (-0.184209) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.458806 / 0.215209 (0.243597) | 4.576351 / 2.077655 (2.498697) | 2.265670 / 1.504120 (0.761550) | 2.073230 / 1.541195 (0.532035) | 2.135283 / 1.468490 
(0.666793) | 0.562506 / 4.584777 (-4.022271) | 3.375101 / 3.745712 (-0.370611) | 1.734393 / 5.269862 (-3.535469) | 1.026622 / 4.565676 (-3.539054) | 0.068144 / 0.424275 (-0.356131) | 0.011092 / 0.007607 (0.003485) | 0.562779 / 0.226044 (0.336734) | 5.608256 / 2.268929 (3.339328) | 2.706468 / 55.444624 (-52.738157) | 2.381607 / 6.876477 (-4.494869) | 2.451027 / 2.142072 (0.308954) | 0.671590 / 4.805227 (-4.133637) | 0.135749 / 6.500664 (-6.364915) | 0.065389 / 0.075469 (-0.010080) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.244806 / 1.841788 (-0.596981) | 14.042150 / 8.074308 (5.967841) | 14.246612 / 10.191392 (4.055220) | 0.134309 / 0.680424 (-0.546114) | 0.017082 / 0.534201 (-0.517119) | 0.366043 / 0.579283 (-0.213240) | 0.400748 / 0.434364 (-0.033616) | 0.425695 / 0.540337 (-0.114643) | 0.509355 / 1.386936 (-0.877581) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006134 / 0.011353 (-0.005219) | 0.003980 / 0.011008 (-0.007028) | 0.078353 / 0.038508 (0.039845) | 0.038011 / 0.023109 (0.014902) | 0.375784 / 0.275898 (0.099886) | 0.433619 / 0.323480 (0.110139) | 0.004897 / 0.007986 (-0.003088) | 0.002981 / 0.004328 (-0.001347) | 0.077362 / 0.004250 (0.073112) | 0.056108 / 0.037052 (0.019056) | 0.395984 / 0.258489 (0.137495) | 0.427397 / 0.293841 (0.133556) | 0.029325 / 0.128546 (-0.099221) | 0.008498 / 0.075646 (-0.067148) | 0.082478 / 0.419271 (-0.336794) | 0.044085 / 0.043533 (0.000552) | 0.389923 / 0.255139 (0.134784) | 0.391180 / 0.283200 (0.107980) | 0.022452 / 0.141683 (-0.119231) | 1.507758 / 1.452155 (0.055603) | 1.530459 / 1.492716 (0.037743) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.230928 / 0.018006 (0.212922) | 0.408484 / 0.000490 (0.407995) | 0.000806 / 0.000200 (0.000606) | 0.000067 / 0.000054 (0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025183 / 0.037411 (-0.012228) | 0.102292 / 0.014526 (0.087766) | 0.108142 / 0.176557 (-0.068415) | 0.161172 / 0.737135 (-0.575963) | 0.114476 / 0.296338 (-0.181862) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.482978 / 0.215209 (0.267769) | 4.816103 / 2.077655 (2.738448) | 2.505567 / 1.504120 (1.001447) | 2.302598 / 1.541195 (0.761404) | 2.371238 / 1.468490 (0.902748) | 0.567467 / 4.584777 (-4.017310) | 3.363407 / 3.745712 (-0.382306) | 1.746213 / 5.269862 (-3.523649) | 1.035468 / 4.565676 (-3.530208) | 0.068431 / 0.424275 (-0.355844) | 0.011069 / 0.007607 (0.003462) | 0.598241 / 0.226044 (0.372196) | 5.953927 / 2.268929 (3.684999) | 3.007493 / 55.444624 (-52.437132) | 2.629399 / 6.876477 (-4.247078) | 2.737201 / 2.142072 (0.595129) | 0.682456 / 4.805227 (-4.122771) | 0.137613 / 6.500664 (-6.363051) | 0.067941 / 0.075469 (-0.007528) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.306015 / 1.841788 (-0.535772) | 14.359240 / 8.074308 (6.284932) | 14.187601 / 10.191392 (3.996209) | 0.138612 / 0.680424 (-0.541812) | 0.016708 / 0.534201 (-0.517493) | 0.366365 / 0.579283 (-0.212918) | 0.396982 / 0.434364 (-0.037382) | 0.426939 / 0.540337 (-0.113398) | 0.520064 / 1.386936 (-0.866872) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#21d0fd041a5eca02d3ee787396216ac613c662ac \"CML watermark\")\n",
"They use `token` and emit a deprecation warning if `use_auth_token` is passed instead (see https://github.com/huggingface/transformers/blob/78a2b19fc84ed55c65f4bf20a901edb7ceb73c5f/src/transformers/modeling_utils.py#L1933). \r\n\r\nI think we can update the `examples` scripts after merging this PR.",
 I think we can update the">
"> I think we can update the examples scripts after merging this PR.\r\n\r\nWe should do a release before updating the examples scripts, no? That's why it's an option to not emit a deprecation warning until transformers and co. are updated with the `token` arg.",
 We should do a release before">
"> We should do a release before updating the examples scripts, no? That's why it's an option to not emit a deprecation warning until transformers and co. are updated with the token arg\r\n\r\nThis would avoid the warning only for the latest `datasets` release. TBH, I don't think this is worth the hassle, considering how simple the warning is to remove.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007644 / 0.011353 (-0.003709) | 0.004667 / 0.011008 (-0.006341) | 0.117347 / 0.038508 (0.078839) | 0.050620 / 0.023109 (0.027510) | 0.415402 / 0.275898 (0.139504) | 0.485898 / 0.323480 (0.162418) | 0.005848 / 0.007986 (-0.002138) | 0.003736 / 0.004328 (-0.000592) | 0.089798 / 0.004250 (0.085547) | 0.069344 / 0.037052 (0.032292) | 0.441684 / 0.258489 (0.183195) | 0.468972 / 0.293841 (0.175131) | 0.036637 / 0.128546 (-0.091909) | 0.010219 / 0.075646 (-0.065427) | 0.394293 / 0.419271 (-0.024978) | 0.061462 / 0.043533 (0.017929) | 0.409448 / 0.255139 (0.154309) | 0.431557 / 0.283200 (0.148358) | 0.027795 / 0.141683 (-0.113888) | 1.837844 / 1.452155 (0.385690) | 1.862683 / 1.492716 (0.369967) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.230500 / 0.018006 (0.212494) | 0.483139 / 0.000490 (0.482649) | 0.006517 / 0.000200 (0.006317) | 0.000143 / 0.000054 (0.000088) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033152 / 0.037411 (-0.004259) | 0.133673 / 0.014526 (0.119147) | 0.143853 / 0.176557 (-0.032704) | 0.215254 / 0.737135 (-0.521882) | 0.150676 / 0.296338 (-0.145662) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.503796 / 0.215209 (0.288587) | 5.049981 / 2.077655 (2.972326) | 2.399427 / 1.504120 (0.895307) | 2.167635 / 1.541195 (0.626441) | 2.257448 / 1.468490 
(0.788958) | 0.641298 / 4.584777 (-3.943479) | 4.828676 / 3.745712 (1.082964) | 4.346069 / 5.269862 (-0.923793) | 2.103890 / 4.565676 (-2.461786) | 0.079115 / 0.424275 (-0.345160) | 0.013377 / 0.007607 (0.005770) | 0.621207 / 0.226044 (0.395162) | 6.190939 / 2.268929 (3.922011) | 2.920129 / 55.444624 (-52.524495) | 2.549225 / 6.876477 (-4.327252) | 2.719221 / 2.142072 (0.577149) | 0.790949 / 4.805227 (-4.014278) | 0.172032 / 6.500664 (-6.328632) | 0.077779 / 0.075469 (0.002310) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.432572 / 1.841788 (-0.409216) | 21.000031 / 8.074308 (12.925723) | 17.555093 / 10.191392 (7.363701) | 0.166646 / 0.680424 (-0.513778) | 0.020451 / 0.534201 (-0.513750) | 0.488767 / 0.579283 (-0.090516) | 0.737036 / 0.434364 (0.302672) | 0.621694 / 0.540337 (0.081356) | 0.732074 / 1.386936 (-0.654862) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008198 / 0.011353 (-0.003155) | 0.004987 / 0.011008 (-0.006021) | 0.090714 / 0.038508 (0.052206) | 0.053379 / 0.023109 (0.030270) | 0.425199 / 0.275898 (0.149301) | 0.514036 / 0.323480 (0.190556) | 0.006043 / 0.007986 (-0.001943) | 0.003888 / 0.004328 (-0.000441) | 0.088294 / 0.004250 (0.084043) | 0.073024 / 0.037052 (0.035971) | 0.435983 / 0.258489 (0.177494) | 0.514293 / 0.293841 (0.220452) | 0.039451 / 0.128546 (-0.089095) | 0.010439 / 0.075646 (-0.065207) | 0.096885 / 0.419271 (-0.322387) | 0.060165 / 0.043533 (0.016632) | 0.421053 / 0.255139 (0.165914) | 0.455545 / 0.283200 (0.172345) | 0.027234 / 0.141683 (-0.114449) | 1.768975 / 1.452155 (0.316820) | 1.842853 / 1.492716 (0.350137) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.278940 / 0.018006 (0.260933) | 0.480709 / 0.000490 (0.480219) | 0.000436 / 0.000200 (0.000236) | 0.000070 / 0.000054 (0.000016) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034900 / 0.037411 (-0.002511) | 0.144893 / 0.014526 (0.130368) | 0.149567 / 0.176557 (-0.026989) | 0.213200 / 0.737135 (-0.523935) | 0.156735 / 0.296338 (-0.139604) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.535897 / 0.215209 (0.320687) | 5.336998 / 2.077655 (3.259343) | 2.685854 / 1.504120 (1.181734) | 2.470177 / 1.541195 (0.928983) | 2.547495 / 1.468490 (1.079004) | 0.642830 / 4.584777 (-3.941947) | 4.595866 / 3.745712 (0.850154) | 2.186696 / 5.269862 (-3.083165) | 1.317969 / 4.565676 (-3.247708) | 0.079268 / 0.424275 (-0.345007) | 0.013792 / 0.007607 (0.006185) | 0.662236 / 0.226044 (0.436192) | 6.604775 / 2.268929 (4.335847) | 3.355888 / 55.444624 (-52.088736) | 2.968911 / 6.876477 (-3.907565) | 3.121862 / 2.142072 (0.979790) | 0.794752 / 4.805227 (-4.010475) | 0.170800 / 6.500664 (-6.329864) | 0.078393 / 0.075469 (0.002924) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.601605 / 1.841788 (-0.240183) | 20.743553 / 8.074308 (12.669245) | 17.543968 / 10.191392 (7.352576) | 0.221884 / 0.680424 (-0.458540) | 0.020779 / 0.534201 (-0.513422) | 0.479677 / 0.579283 (-0.099606) | 0.516207 / 0.434364 (0.081843) | 0.564046 / 0.540337 (0.023709) | 0.711336 / 1.386936 (-0.675600) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#819bb4346434912eb405ce3f3e9f21dc25a2fe85 \"CML watermark\")\n",
"Yes, sounds great! Thanks",
"yup"
] | 1,687,969,598,000 | 1,688,570,540,000 | 1,688,400,213,000 | CONTRIBUTOR | null | ... to be consistent with `transformers` and `huggingface_hub`. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5996/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5996/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5996",
"html_url": "https://github.com/huggingface/datasets/pull/5996",
"diff_url": "https://github.com/huggingface/datasets/pull/5996.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5996.patch",
"merged_at": "2023-07-03T16:03:33"
} | true |