url (string, lengths 58-61) | repository_url (string, 1 class) | labels_url (string, lengths 72-75) | comments_url (string, lengths 67-70) | events_url (string, lengths 65-68) | html_url (string, lengths 46-51) | id (int64, 600M-2.05B) | node_id (string, lengths 18-32) | number (int64, 2-6.51k) | title (string, lengths 1-290) | user (dict) | labels (list, lengths 0-4) | state (string, 2 classes) | locked (bool, 1 class) | assignee (dict) | assignees (list, lengths 0-4) | milestone (dict) | comments (sequence, lengths 0-30) | created_at (unknown) | updated_at (unknown) | closed_at (unknown) | author_association (string, 3 classes) | active_lock_reason (float64) | draft (float64, 0-1, nullable) | pull_request (dict) | body (string, lengths 0-228k, nullable) | reactions (dict) | timeline_url (string, lengths 67-70) | performed_via_github_app (float64) | state_reason (string, 3 classes) | is_pull_request (bool, 2 classes) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/5219 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5219/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5219/comments | https://api.github.com/repos/huggingface/datasets/issues/5219/events | https://github.com/huggingface/datasets/issues/5219 | 1,441,255,910 | I_kwDODunzps5V59Hm | 5,219 | Delta Tables usage using Datasets Library | {
"avatar_url": "https://avatars.githubusercontent.com/u/23002137?v=4",
"events_url": "https://api.github.com/users/reichenbch/events{/privacy}",
"followers_url": "https://api.github.com/users/reichenbch/followers",
"following_url": "https://api.github.com/users/reichenbch/following{/other_user}",
"gists_url": "https://api.github.com/users/reichenbch/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/reichenbch",
"id": 23002137,
"login": "reichenbch",
"node_id": "MDQ6VXNlcjIzMDAyMTM3",
"organizations_url": "https://api.github.com/users/reichenbch/orgs",
"received_events_url": "https://api.github.com/users/reichenbch/received_events",
"repos_url": "https://api.github.com/users/reichenbch/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/reichenbch/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/reichenbch/subscriptions",
"type": "User",
"url": "https://api.github.com/users/reichenbch"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [
"Hi ! Interesting :) Can you provide concrete examples of cases where it can be useful ?",
"Few example blogs and posts that might help on this - \r\n\r\n1. https://hevodata.com/learn/databricks-delta-tables/\r\n2. https://docs.databricks.com/delta/index.html\r\n\r\nBasically, we are looking at utility of Datasets library with Delta Lake Tables.\r\n",
"`datasets` can already read/write from parquet from/to a cloud storage using fsspec, if I understand correctly it's should be possible to load parquet files as delat lake tables no ? :) Or is there someting missing ?",
"@lhoestq Per my understanding, delta lake table is a bunch of paruqet files together with the meta to support ACID. For example file 1 contains v0.1 of record A while file 2 contains v0.2 of record A. I am assuming the Hugging face dataset would delegate the read/write delta table to 3rd party lib, maybe pyarrow. Correct me if I was wrong @reichenbch \r\n\r\nAnd I am assuming, people are asking the versioning of Hugging face datasets. But I am assuming Hugging face delegate this function to github and it is not the key requirement for Public Data set. It actually the key function of ML Ops, I am not sure whether hugging face would like expand to that area."
] | "2022-11-09T02:43:56Z" | "2023-03-02T19:29:12Z" | null | NONE | null | null | null | ### Feature request
Add compatibility of the Datasets library with the Delta format, elevating the library's utility from the machine learning scope to the data engineering scope as well.
### Motivation
We know the datasets library can absorb csv, json, parquet, etc. file formats, but it would be great if it could also work with Delta Tables (delta format), which offer features such as time travel, layout optimization, and query performance that aid data engineering.
This would extend the Datasets library from a machine learning utility to a data engineering utility and expand its horizons thereafter. I use the Datasets library in all my use cases, and as my role expands so does the work; compatibility with the Datasets library is something I don't want to lose.
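A minimal sketch of what this could look like, assuming the third-party `deltalake` package (an illustration, not an existing `datasets` feature; the path and version below are hypothetical):
```python
# Sketch: read a Delta table into a datasets.Dataset via the third-party
# `deltalake` package (assumed dependency, not built into `datasets`).
from deltalake import DeltaTable
from datasets import Dataset

dt = DeltaTable("path/to/delta_table")    # hypothetical path; DeltaTable(path, version=3) would time-travel
ds = Dataset.from_pandas(dt.to_pandas())  # materialize the table's parquet files into a Dataset
```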
### Your contribution
Would love to work on this feature, even if it has to be picked up from scratch, including design paradigms and patterns.
I have a basic idea about Delta Live Tables and would brush up on it easily for this feature. | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5219/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5219/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2441 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2441/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2441/comments | https://api.github.com/repos/huggingface/datasets/issues/2441/events | https://github.com/huggingface/datasets/issues/2441 | 908,554,713 | MDU6SXNzdWU5MDg1NTQ3MTM= | 2,441 | DuplicatedKeysError on personal dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/22605313?v=4",
"events_url": "https://api.github.com/users/lucaguarro/events{/privacy}",
"followers_url": "https://api.github.com/users/lucaguarro/followers",
"following_url": "https://api.github.com/users/lucaguarro/following{/other_user}",
"gists_url": "https://api.github.com/users/lucaguarro/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lucaguarro",
"id": 22605313,
"login": "lucaguarro",
"node_id": "MDQ6VXNlcjIyNjA1MzEz",
"organizations_url": "https://api.github.com/users/lucaguarro/orgs",
"received_events_url": "https://api.github.com/users/lucaguarro/received_events",
"repos_url": "https://api.github.com/users/lucaguarro/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lucaguarro/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lucaguarro/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lucaguarro"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | [
"Hi ! In your dataset script you must be yielding examples like\r\n```python\r\nfor line in file:\r\n ...\r\n yield key, {...}\r\n```\r\n\r\nSince `datasets` 1.7.0 we enforce the keys to be unique.\r\nHowever it looks like your examples generator creates duplicate keys: at least two examples have key 0.\r\n\r\nYou can fix that by making sure that your keys are unique.\r\n\r\nFor example if you use a counter to define the key of each example, make sure that your counter is not reset to 0 in during examples generation (between two open files for examples).\r\n\r\nLet me know if you have other questions :)",
"Yup, I indeed was generating duplicate keys. Fixed it and now it's working."
] | "2021-06-01T17:59:41Z" | "2021-06-04T23:50:03Z" | "2021-06-04T23:50:03Z" | NONE | null | null | null | ## Describe the bug
Starting today, I have been getting a DuplicatedKeysError while trying to load my dataset from my own script.
Error returned when running this line: `dataset = load_dataset('/content/drive/MyDrive/Thesis/Datasets/book_preprocessing/goodreads_maharjan_trimmed_and_nered/goodreadsnered.py')`
Note that my script was working fine with earlier versions of the Datasets library. I cannot say with 100% certainty whether I have been doing something wrong with my dataset script this whole time or whether this is simply a bug in the new version of datasets.
## Steps to reproduce the bug
I cannot provide code to reproduce the error as I am working with my own dataset. I can however provide my script if requested.
## Expected results
For my data to be loaded.
## Actual results
**DuplicatedKeysError** exception is raised
```
Downloading and preparing dataset good_reads_practice_dataset/main_domain (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/.cache/huggingface/datasets/good_reads_practice_dataset/main_domain/1.1.0/64ff7c3fee2693afdddea75002eb6887d4fedc3d812ae3622128c8504ab21655...
---------------------------------------------------------------------------
DuplicatedKeysError Traceback (most recent call last)
<ipython-input-6-c342ea0dae9d> in <module>()
----> 1 dataset = load_dataset('/content/drive/MyDrive/Thesis/Datasets/book_preprocessing/goodreads_maharjan_trimmed_and_nered/goodreadsnered.py')
5 frames
/usr/local/lib/python3.7/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, task, **config_kwargs)
749 try_from_hf_gcs=try_from_hf_gcs,
750 base_path=base_path,
--> 751 use_auth_token=use_auth_token,
752 )
753
/usr/local/lib/python3.7/dist-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)
573 if not downloaded_from_gcs:
574 self._download_and_prepare(
--> 575 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
576 )
577 # Sync info
/usr/local/lib/python3.7/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
650 try:
651 # Prepare split will record examples associated to the split
--> 652 self._prepare_split(split_generator, **prepare_split_kwargs)
653 except OSError as e:
654 raise OSError(
/usr/local/lib/python3.7/dist-packages/datasets/builder.py in _prepare_split(self, split_generator)
990 writer.write(example, key)
991 finally:
--> 992 num_examples, num_bytes = writer.finalize()
993
994 split_generator.split_info.num_examples = num_examples
/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py in finalize(self, close_stream)
407 # In case current_examples < writer_batch_size, but user uses finalize()
408 if self._check_duplicates:
--> 409 self.check_duplicate_keys()
410 # Re-intializing to empty list for next batch
411 self.hkey_record = []
/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py in check_duplicate_keys(self)
347 for hash, key in self.hkey_record:
348 if hash in tmp_record:
--> 349 raise DuplicatedKeysError(key)
350 else:
351 tmp_record.add(hash)
DuplicatedKeysError: FAILURE TO GENERATE DATASET !
Found duplicate Key: 0
Keys should be unique and deterministic in nature
```
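For reference, a minimal sketch of a generator that avoids this error by using one counter that is never reset across files (illustrative only, not the actual script):
```python
# Sketch: keep the example key unique across ALL input files
# (hypothetical _generate_examples; field names are illustrative).
def _generate_examples(self, filepaths):
    key = 0  # global counter, NOT reset to 0 for each file
    for filepath in filepaths:
        with open(filepath, encoding="utf-8") as f:
            for line in f:
                yield key, {"text": line.strip()}
                key += 1
```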
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.7.0
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.7.9
- PyArrow version: 3.0.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2441/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2441/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4936 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4936/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4936/comments | https://api.github.com/repos/huggingface/datasets/issues/4936/events | https://github.com/huggingface/datasets/issues/4936 | 1,363,274,907 | I_kwDODunzps5RQeyb | 4,936 | vivos (Vietnamese speech corpus) dataset not accessible | {
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna"
} | [
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | [
"If you need an example of a small audio datasets, I just created few hours ago a speech dataset with only 300MB of compressed audio files https://huggingface.co/datasets/indonesian-nlp/librivox-indonesia. It works also with streaming (@albertvillanova helped me adding this functionality) :-)",
"@cahya-wirawan omg this is awesome!! thank you! ",
"We have contacted the authors to ask them."
] | "2022-09-06T13:17:55Z" | "2022-09-21T06:06:02Z" | "2022-09-12T07:14:20Z" | CONTRIBUTOR | null | null | null | ## Describe the bug
VIVOS data is not accessible anymore; neither of these links works (at least from France):
* https://ailab.hcmus.edu.vn/assets/vivos.tar.gz (data)
* https://ailab.hcmus.edu.vn/vivos (dataset page)
Therefore `load_dataset` doesn't work.
## Steps to reproduce the bug
```python
ds = load_dataset("vivos")
```
## Expected results
dataset loaded
## Actual results
```
ConnectionError: Couldn't reach https://ailab.hcmus.edu.vn/assets/vivos.tar.gz (ConnectionError(MaxRetryError("HTTPSConnectionPool(host='ailab.hcmus.edu.vn', port=443): Max retries exceeded with url: /assets/vivos.tar.gz (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f9d8a27d190>: Failed to establish a new connection: [Errno -5] No address associated with hostname'))")))
```
Will try to contact the authors, as we wanted to use Vivos as an example in documentation on how to create scripts for audio datasets (https://github.com/huggingface/datasets/pull/4872), because it's small and straightforward and uses tar archives. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4936/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4936/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/1521 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1521/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1521/comments | https://api.github.com/repos/huggingface/datasets/issues/1521/events | https://github.com/huggingface/datasets/pull/1521 | 764,320,841 | MDExOlB1bGxSZXF1ZXN0NTM4NDQzOTgz | 1,521 | Atomic | {
"avatar_url": "https://avatars.githubusercontent.com/u/8900094?v=4",
"events_url": "https://api.github.com/users/ontocord/events{/privacy}",
"followers_url": "https://api.github.com/users/ontocord/followers",
"following_url": "https://api.github.com/users/ontocord/following{/other_user}",
"gists_url": "https://api.github.com/users/ontocord/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ontocord",
"id": 8900094,
"login": "ontocord",
"node_id": "MDQ6VXNlcjg5MDAwOTQ=",
"organizations_url": "https://api.github.com/users/ontocord/orgs",
"received_events_url": "https://api.github.com/users/ontocord/received_events",
"repos_url": "https://api.github.com/users/ontocord/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ontocord/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ontocord/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ontocord"
} | [] | closed | false | null | [] | null | [
"I had to create a new PR to fix git errors. See: https://github.com/huggingface/datasets/pull/1525\r\n\r\nI'm closing this PR. "
] | "2020-12-12T20:18:08Z" | "2020-12-12T22:56:48Z" | "2020-12-12T22:56:48Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1521.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1521",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1521.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1521"
} | This is the ATOMIC common sense dataset. More info can be found here:
* README.md still to be created. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1521/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1521/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4464 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4464/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4464/comments | https://api.github.com/repos/huggingface/datasets/issues/4464/events | https://github.com/huggingface/datasets/pull/4464 | 1,265,682,931 | PR_kwDODunzps45XlWW | 4,464 | Extend support for streaming datasets that use xml.dom.minidom.parse | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | "2022-06-09T06:58:25Z" | "2022-06-09T08:43:24Z" | "2022-06-09T08:34:16Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4464.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4464",
"merged_at": "2022-06-09T08:34:15Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4464.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4464"
} | This PR extends the support in streaming mode for datasets that use `xml.dom.minidom.parse`, by patching that function.
This PR adds support for streaming datasets like "Yaxin/SemEval2015".
Fix #4453. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4464/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4464/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6370 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6370/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6370/comments | https://api.github.com/repos/huggingface/datasets/issues/6370/events | https://github.com/huggingface/datasets/issues/6370 | 1,972,073,909 | I_kwDODunzps51i3W1 | 6,370 | TensorDataset format does not work with Trainer from transformers | {
"avatar_url": "https://avatars.githubusercontent.com/u/49014051?v=4",
"events_url": "https://api.github.com/users/jinzzasol/events{/privacy}",
"followers_url": "https://api.github.com/users/jinzzasol/followers",
"following_url": "https://api.github.com/users/jinzzasol/following{/other_user}",
"gists_url": "https://api.github.com/users/jinzzasol/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jinzzasol",
"id": 49014051,
"login": "jinzzasol",
"node_id": "MDQ6VXNlcjQ5MDE0MDUx",
"organizations_url": "https://api.github.com/users/jinzzasol/orgs",
"received_events_url": "https://api.github.com/users/jinzzasol/received_events",
"repos_url": "https://api.github.com/users/jinzzasol/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jinzzasol/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jinzzasol/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jinzzasol"
} | [] | closed | false | null | [] | null | [
"I figured it out. I found that `Trainer` does not work with TensorDataset even though the document says it uses it. Instead, I ended up creating a dictionary and converting it to a dataset using `dataset.Dataset.from_dict()`.\r\n\r\nI will leave this post open for a while. If someone knows a better approach, please leave a comment.",
"Only issues directly related to the HF datasets library should be reported here. ~So, I'm transferring this issue to the `transformers` repo.~ I'm not a `transformers` maintainer, so GitHub doesn't let me transfer it there :(. This means you need to do it manually."
] | "2023-11-01T10:09:54Z" | "2023-11-29T16:31:08Z" | "2023-11-29T16:31:08Z" | NONE | null | null | null | ### Describe the bug
The model was built to fine-tune a BERT model for relation extraction.
trainer.train() returns the error message ```TypeError: vars() argument must have __dict__ attribute``` when `train_dataset` is generated from `torch.utils.data.TensorDataset`.
However, according to the documentation, the required data format is `torch.utils.data.TensorDataset`.
![image](https://github.com/huggingface/datasets/assets/49014051/36fa34ac-3127-4c64-9580-9ab736136d83)
The Transformers trainer is supposed to accept `train_dataset` in the format of `torch.utils.data.TensorDataset`, but it returns the error message *"TypeError: vars() argument must have __dict__ attribute"*
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-30-5df728c929a2> in <cell line: 1>()
----> 1 trainer.train()
2 trainer.evaluate(test_dataset)
9 frames
/usr/local/lib/python3.10/dist-packages/transformers/data/data_collator.py in <listcomp>(.0)
107
108 if not isinstance(features[0], Mapping):
--> 109 features = [vars(f) for f in features]
110 first = features[0]
111 batch = {}
TypeError: vars() argument must have __dict__ attribute
```
### Steps to reproduce the bug
Create train_dataset using `torch.utils.data.TensorDataset`, for instance,
```train_dataset = torch.utils.data.TensorDataset(train_input_ids, train_attention_masks, train_labels)```
Feed this `train_dataset` to your trainer and run `trainer.train()`:
```
trainer = Trainer(
    model,
    training_args,
    train_dataset=train_dataset,
    eval_dataset=dev_dataset,
    compute_metrics=compute_metrics,
)
```
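As mentioned in the comments, a workaround is to pass `Trainer` a `datasets.Dataset` instead of a `TensorDataset`; a sketch assuming the same tensors as above (names are illustrative):
```python
# Sketch: convert the tensors into a datasets.Dataset, whose dict-like rows
# work with Trainer's default data collator (tensor names from the snippet above).
from datasets import Dataset

train_dataset = Dataset.from_dict({
    "input_ids": train_input_ids.tolist(),
    "attention_mask": train_attention_masks.tolist(),
    "labels": train_labels.tolist(),
})
```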
### Expected behavior
Trainer should start training
### Environment info
It is running on Google Colab
- `datasets` version: 2.14.6
- Platform: Linux-5.15.120+-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.17.3
- PyArrow version: 9.0.0
- Pandas version: 1.5.3 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6370/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6370/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2096 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2096/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2096/comments | https://api.github.com/repos/huggingface/datasets/issues/2096/events | https://github.com/huggingface/datasets/issues/2096 | 838,038,379 | MDU6SXNzdWU4MzgwMzgzNzk= | 2,096 | CoNLL 2003 dataset not including German | {
"avatar_url": "https://avatars.githubusercontent.com/u/8406802?v=4",
"events_url": "https://api.github.com/users/rxian/events{/privacy}",
"followers_url": "https://api.github.com/users/rxian/followers",
"following_url": "https://api.github.com/users/rxian/following{/other_user}",
"gists_url": "https://api.github.com/users/rxian/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/rxian",
"id": 8406802,
"login": "rxian",
"node_id": "MDQ6VXNlcjg0MDY4MDI=",
"organizations_url": "https://api.github.com/users/rxian/orgs",
"received_events_url": "https://api.github.com/users/rxian/received_events",
"repos_url": "https://api.github.com/users/rxian/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/rxian/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rxian/subscriptions",
"type": "User",
"url": "https://api.github.com/users/rxian"
} | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | closed | false | null | [] | null | [
"Hello. I've been looking for information about German Conll2003 and found your question. Official site (https://www.clips.uantwerpen.be/conll2003/ner/) mentions that organizers provide only annotation. German texts (ECI Multilingual Text Corpus) are not freely available and can be ordered from the Linguistic Data Consortium.\r\n\r\nBut maybe something has changed since 2003.",
"You can find the reason for not including the German data here: https://github.com/huggingface/datasets/issues/4230."
] | "2021-03-22T19:23:56Z" | "2023-07-25T16:49:07Z" | "2023-07-25T16:49:07Z" | NONE | null | null | null | Hello, thanks for all the work on developing and maintaining this amazing platform, which I am enjoying working with!
I was wondering if there is a reason why the German CoNLL 2003 dataset is not included in the [repository](https://github.com/huggingface/datasets/tree/master/datasets/conll2003), since copies of it can be found in some places on the internet such as GitHub. I could help add the German data to the hub, unless there are some copyright issues that I am unaware of...
This matters because many works use the union of the CoNLL 2002 and 2003 datasets to compare cross-lingual NER transfer performance in `en`, `de`, `es`, and `nl`, e.g., [XLM-R](https://www.aclweb.org/anthology/2020.acl-main.747.pdf).
## Adding a Dataset
- **Name:** CoNLL 2003 German
- **Paper:** https://www.aclweb.org/anthology/W03-0419/
- **Data:** https://github.com/huggingface/datasets/tree/master/datasets/conll2003
| {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2096/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2096/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/65 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/65/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/65/comments | https://api.github.com/repos/huggingface/datasets/issues/65/events | https://github.com/huggingface/datasets/pull/65 | 614,746,516 | MDExOlB1bGxSZXF1ZXN0NDE1MjM4MDEw | 65 | fix math dataset and xcopa | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
} | [] | closed | false | null | [] | null | [] | "2020-05-08T13:33:55Z" | "2020-05-08T13:35:41Z" | "2020-05-08T13:35:40Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/65.diff",
"html_url": "https://github.com/huggingface/datasets/pull/65",
"merged_at": "2020-05-08T13:35:40Z",
"patch_url": "https://github.com/huggingface/datasets/pull/65.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/65"
} | - fixes math dataset and xcopa, uploaded both of the to S3 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/65/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/65/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6003 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6003/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6003/comments | https://api.github.com/repos/huggingface/datasets/issues/6003/events | https://github.com/huggingface/datasets/issues/6003 | 1,786,554,110 | I_kwDODunzps5qfKb- | 6,003 | interleave_datasets & DataCollatorForLanguageModeling having a conflict ? | {
"avatar_url": "https://avatars.githubusercontent.com/u/1929830?v=4",
"events_url": "https://api.github.com/users/PonteIneptique/events{/privacy}",
"followers_url": "https://api.github.com/users/PonteIneptique/followers",
"following_url": "https://api.github.com/users/PonteIneptique/following{/other_user}",
"gists_url": "https://api.github.com/users/PonteIneptique/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/PonteIneptique",
"id": 1929830,
"login": "PonteIneptique",
"node_id": "MDQ6VXNlcjE5Mjk4MzA=",
"organizations_url": "https://api.github.com/users/PonteIneptique/orgs",
"received_events_url": "https://api.github.com/users/PonteIneptique/received_events",
"repos_url": "https://api.github.com/users/PonteIneptique/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/PonteIneptique/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PonteIneptique/subscriptions",
"type": "User",
"url": "https://api.github.com/users/PonteIneptique"
} | [] | open | false | null | [] | null | [] | "2023-07-03T17:15:31Z" | "2023-07-03T17:15:31Z" | null | NONE | null | null | null | ### Describe the bug
Hi everyone :)
I have two local, custom datasets (1 "sentence" per line) which I split 95/5 for pre-training a BERT model. I use a modified version of `run_mlm.py` in order to be able to make use of `interleave_datasets`:
- `tokenize()` runs fine
- `group_text()` runs fine
Everytime, on step 19, I get
```pytb
File "env/lib/python3.9/site-packages/transformers/data/data_collator.py", line 779, in torch_mask_tokens
inputs[indices_random] = random_words[indices_random]
RuntimeError: Index put requires the source and destination dtypes match, got Float for the destination and Long for the source.
```
I tried:
- training without interleave on dataset 1, it runs
- training without interleave on dataset 2, it runs
- training without `.to_iterable_dataset()`, it hangs then crashes
- training without `group_texts()` and padding to max_length seemed to fix the issue, but who knows whether the issue would simply have appeared much later in terms of steps.
I might have coded something wrong, but I don't see what.
### Steps to reproduce the bug
I have this function:
```py
def build_dataset(path: str, percent: str):
    dataset = load_dataset(
        "text",
        data_files={"train": [path]},
        split=f"train[{percent}]"
    )
    dataset = dataset.map(
        lambda examples: tokenize(examples["text"]),
        batched=True,
        num_proc=num_proc,
    )
    dataset = dataset.map(
        group_texts,
        batched=True,
        num_proc=num_proc,
        desc=f"Grouping texts in chunks of {tokenizer.max_seq_length}",
        remove_columns=["text"]
    )
    print(len(dataset))
    return dataset.to_iterable_dataset()
```
I hardcoded `group_texts`:
```py
def group_texts(examples):
    # Concatenate all texts.
    concatenated_examples = {k: list(chain(*examples[k])) for k in examples.keys()}
    total_length = len(concatenated_examples[list(examples.keys())[0]])
    # We drop the small remainder, and if the total_length < max_seq_length we exclude this batch and return an empty dict.
    # We could add padding if the model supported it instead of this drop, you can customize this part to your needs.
    total_length = (total_length // 512) * 512
    # Split by chunks of max_len.
    result = {
        k: [t[i: i + 512] for i in range(0, total_length, 512)]
        for k, t in concatenated_examples.items()
    }
    # result = {k: [el for el in elements if el] for k, elements in result.items()}
    return result
```
And then I build datasets using the following code:
```py
train1 = build_dataset("d1.txt", ":95%")
train2 = build_dataset("d2.txt", ":95%")
dev1 = build_dataset("d1.txt", "95%:")
dev2 = build_dataset("d2.txt", "95%:")
```
and finally I run
```py
train_dataset = interleave_datasets(
    [train1, train2],
    probabilities=[0.8, 0.2],
    seed=42
)
eval_dataset = interleave_datasets(
    [dev1, dev2],
    probabilities=[0.8, 0.2],
    seed=42
)
```
Then I run the training part which remains mostly untouched:
> CUDA_VISIBLE_DEVICES=1 python custom_dataset.py --model_type bert --per_device_train_batch_size 32 --do_train --output_dir /var/mlm/training-bert/model --max_seq_length 512 --save_steps 10000 --save_total_limit 3 --auto_find_batch_size --logging_dir ./logs-bert --learning_rate 0.0001 --do_train --num_train_epochs 25 --warmup_steps 10000 --max_step 45000 --fp16
### Expected behavior
The model should then train normally, but fails every time at the same step (19).
printing the variables at `inputs[indices_random] = random_words[indices_random]` shows a magnificent empty tensor (, 32) [if I remember correctly]
### Environment info
transformers[torch] 4.30.2
Ubuntu
A100 0 CUDA 12
Driver Version: 525.116.04 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6003/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6003/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6221 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6221/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6221/comments | https://api.github.com/repos/huggingface/datasets/issues/6221/events | https://github.com/huggingface/datasets/issues/6221 | 1,884,324,631 | I_kwDODunzps5wUIMX | 6,221 | Support saving datasets with custom formatting | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | open | false | null | [] | null | [
"Not a fan of pickling this sort of stuff either.\r\nNote that users can also share the code in their dataset documentation."
] | "2023-09-06T16:03:32Z" | "2023-09-06T18:32:07Z" | null | CONTRIBUTOR | null | null | null | Requested in https://discuss.huggingface.co/t/using-set-transform-on-a-dataset-leads-to-an-exception/53036.
I am not sure if supporting this is the best idea for the following reasons:
>For this to work, we would have to pickle a custom transform, which means the transform and the objects it references need to be serializable. Also, deserializing these bytes would make `load_from_disk` unsafe, so I'm not sure this is a good idea.
@lhoestq WDYT?
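For context, a quick sketch of the behavior under discussion: `set_transform` applies a transform lazily at access time, and `save_to_disk` does not persist it (illustrative example):
```python
# Sketch: an on-the-fly transform that save_to_disk cannot serialize today.
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b"]})
ds.set_transform(lambda batch: {"text": [t.upper() for t in batch["text"]]})
print(ds[0])  # {'text': 'A'}  (transform runs at access time)
# ds.save_to_disk("out") would save the underlying data, not the transform.
```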
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6221/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6221/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4332 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4332/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4332/comments | https://api.github.com/repos/huggingface/datasets/issues/4332/events | https://github.com/huggingface/datasets/pull/4332 | 1,234,021,188 | PR_kwDODunzps43uO8S | 4,332 | Adding eval metadata for arabic speech corpus | {
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sashavor",
"id": 14205986,
"login": "sashavor",
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"repos_url": "https://api.github.com/users/sashavor/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sashavor"
} | [] | closed | false | null | [] | null | [] | "2022-05-12T13:51:38Z" | "2022-05-12T21:03:21Z" | "2022-05-12T21:03:20Z" | NONE | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4332.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4332",
"merged_at": "2022-05-12T21:03:20Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4332.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4332"
} | Adding eval metadata for arabic speech corpus | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4332/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4332/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3187 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3187/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3187/comments | https://api.github.com/repos/huggingface/datasets/issues/3187/events | https://github.com/huggingface/datasets/pull/3187 | 1,040,412,869 | PR_kwDODunzps4t44Ab | 3,187 | Add ChrF(++) (as implemented in sacrebleu) | {
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/BramVanroy",
"id": 2779410,
"login": "BramVanroy",
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"type": "User",
"url": "https://api.github.com/users/BramVanroy"
} | [] | closed | false | null | [] | null | [] | "2021-10-31T08:53:58Z" | "2021-11-02T14:50:50Z" | "2021-11-02T14:31:26Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3187.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3187",
"merged_at": "2021-11-02T14:31:26Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3187.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3187"
} | Similar to my [PR for TER](https://github.com/huggingface/datasets/pull/3153), it feels only right to also include ChrF and friends. These are present in sacrebleu and are therefore similarly straightforward to implement as TER. I tested the implementation against sacrebleu's tests to verify. You can try this below for yourself:
```python
import datasets

EPSILON = 1e-4
chrf = datasets.load_metric(r"path\to\datasets\metrics\chrf")

test_cases = [
    (["abcdefg"], ["hijklmnop"], 0.0),
    (["a"], ["b"], 0.0),
    ([""], ["b"], 0.0),
    ([""], ["ref"], 0.0),
    ([""], ["reference"], 0.0),
    (["aa"], ["ab"], 8.3333),
    (["a", "b"], ["a", "c"], 8.3333),
    (["a"], ["a"], 16.6667),
    (["a b c"], ["a b c"], 50.0),
    (["a b c"], ["abc"], 50.0),
    ([" risk assessment must be made of those who are qualified and expertise in the sector - these are the scientists ."],
     ["risk assessment has to be undertaken by those who are qualified and expert in that area - that is the scientists ."], 63.361730),
    ([" Die Beziehung zwischen Obama und Netanjahu ist nicht gerade freundlich. "],
     ["Das Verhältnis zwischen Obama und Netanyahu ist nicht gerade freundschaftlich."], 64.1302698),
    (["Niemand hat die Absicht, eine Mauer zu errichten"], ["Niemand hat die Absicht, eine Mauer zu errichten"], 100.0),
]

for hyp, ref, score in test_cases:
    # Note the reference transformation, which is different from sacrebleu's input format
    results = chrf.compute(predictions=hyp, references=[[r] for r in ref],
                           char_order=6, word_order=0, beta=3, eps_smoothing=True)
    if abs(score - results["score"]) > EPSILON:
        print(f"expected {score}, got {results['score']} for {hyp} - {ref}")

test_cases_effective_order = [
    (["a"], ["a"], 100.0),
    ([""], ["reference"], 0.0),
    (["a b c"], ["a b c"], 100.0),
    (["a b c"], ["abc"], 100.0),
    ([""], ["c"], 0.0),
    (["a", "b"], ["a", "c"], 50.0),
    (["aa"], ["ab"], 25.0),
]

for hyp, ref, score in test_cases_effective_order:
    # Note the reference transformation, which is different from sacrebleu's input format
    results = chrf.compute(predictions=hyp, references=[[r] for r in ref],
                           char_order=6, word_order=0, beta=3, eps_smoothing=False)
    if abs(score - results["score"]) > EPSILON:
        print(f"expected {score}, got {results['score']} for {hyp} - {ref}")

test_cases_keep_whitespace = [
    (
        ["Die Beziehung zwischen Obama und Netanjahu ist nicht gerade freundlich."],
        ["Das Verhältnis zwischen Obama und Netanyahu ist nicht gerade freundschaftlich."],
        67.3481606,
    ),
    (
        ["risk assessment must be made of those who are qualified and expertise in the sector - these are the scientists ."],
        ["risk assessment has to be undertaken by those who are qualified and expert in that area - that is the scientists ."],
        65.2414427,
    ),
]

for hyp, ref, score in test_cases_keep_whitespace:
    # Note the reference transformation, which is different from sacrebleu's input format
    results = chrf.compute(predictions=hyp, references=[[r] for r in ref],
                           char_order=6, word_order=0, beta=3,
                           whitespace=True)
    if abs(score - results["score"]) > EPSILON:
        print(f"expected {score}, got {results['score']} for {hyp} - {ref}")

predictions = ["The relationship between Obama and Netanyahu is not exactly friendly."]
references = [["The ties between Obama and Netanyahu are not particularly friendly."]]
print(chrf.compute(predictions=predictions, references=references))
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3187/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3187/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1140 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1140/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1140/comments | https://api.github.com/repos/huggingface/datasets/issues/1140/events | https://github.com/huggingface/datasets/pull/1140 | 757,399,142 | MDExOlB1bGxSZXF1ZXN0NTMyNzgyODc0 | 1,140 | Add Urdu Sentiment Corpus (USC). | {
"avatar_url": "https://avatars.githubusercontent.com/u/44389205?v=4",
"events_url": "https://api.github.com/users/chaitnayabasava/events{/privacy}",
"followers_url": "https://api.github.com/users/chaitnayabasava/followers",
"following_url": "https://api.github.com/users/chaitnayabasava/following{/other_user}",
"gists_url": "https://api.github.com/users/chaitnayabasava/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/chaitnayabasava",
"id": 44389205,
"login": "chaitnayabasava",
"node_id": "MDQ6VXNlcjQ0Mzg5MjA1",
"organizations_url": "https://api.github.com/users/chaitnayabasava/orgs",
"received_events_url": "https://api.github.com/users/chaitnayabasava/received_events",
"repos_url": "https://api.github.com/users/chaitnayabasava/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/chaitnayabasava/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chaitnayabasava/subscriptions",
"type": "User",
"url": "https://api.github.com/users/chaitnayabasava"
} | [] | closed | false | null | [] | null | [
"@lhoestq have made the suggested changes in the README file.",
"@lhoestq Created a new PR #1231 with only the relevant files.\r\nclosing this one :)"
] | "2020-12-04T20:55:27Z" | "2020-12-07T03:27:23Z" | "2020-12-07T03:27:23Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1140.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1140",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1140.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1140"
} | Added Urdu Sentiment Corpus. More details about the dataset over <a href="https://github.com/MuhammadYaseenKhan/Urdu-Sentiment-Corpus">here</a>. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1140/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1140/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4547 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4547/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4547/comments | https://api.github.com/repos/huggingface/datasets/issues/4547/events | https://github.com/huggingface/datasets/pull/4547 | 1,282,160,517 | PR_kwDODunzps46Ot5u | 4,547 | [CI] Fix some warnings | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"There is a CI failure only related to the missing content of the universal_dependencies dataset card, we can ignore this failure in this PR",
"good catch, I thought I resolved them all sorry",
"Alright it should be good now"
] | "2022-06-23T10:10:49Z" | "2022-06-28T14:10:57Z" | "2022-06-28T13:59:54Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4547.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4547",
"merged_at": "2022-06-28T13:59:54Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4547.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4547"
} | There are some warnings in the CI that are annoying, I tried to remove most of them | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4547/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4547/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/666 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/666/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/666/comments | https://api.github.com/repos/huggingface/datasets/issues/666/events | https://github.com/huggingface/datasets/issues/666 | 707,608,578 | MDU6SXNzdWU3MDc2MDg1Nzg= | 666 | Does both 'bookcorpus' and 'wikipedia' belong to the same datasets which Google used for pretraining BERT? | {
"avatar_url": "https://avatars.githubusercontent.com/u/31090427?v=4",
"events_url": "https://api.github.com/users/wahab4114/events{/privacy}",
"followers_url": "https://api.github.com/users/wahab4114/followers",
"following_url": "https://api.github.com/users/wahab4114/following{/other_user}",
"gists_url": "https://api.github.com/users/wahab4114/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/wahab4114",
"id": 31090427,
"login": "wahab4114",
"node_id": "MDQ6VXNlcjMxMDkwNDI3",
"organizations_url": "https://api.github.com/users/wahab4114/orgs",
"received_events_url": "https://api.github.com/users/wahab4114/received_events",
"repos_url": "https://api.github.com/users/wahab4114/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/wahab4114/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wahab4114/subscriptions",
"type": "User",
"url": "https://api.github.com/users/wahab4114"
} | [] | closed | false | null | [] | null | [
"No they are other similar copies but they are not provided by the official Bert models authors."
] | "2020-09-23T19:02:25Z" | "2020-10-27T15:19:25Z" | "2020-10-27T15:19:25Z" | NONE | null | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/666/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/666/timeline | null | completed | false |
|
https://api.github.com/repos/huggingface/datasets/issues/5559 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5559/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5559/comments | https://api.github.com/repos/huggingface/datasets/issues/5559/events | https://github.com/huggingface/datasets/pull/5559 | 1,593,676,489 | PR_kwDODunzps5KcKSb | 5,559 | Fix map suffix_template | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.011596 / 0.011353 (0.000244) | 0.005845 / 0.011008 (-0.005164) | 0.121302 / 0.038508 (0.082794) | 0.034306 / 0.023109 (0.011196) | 0.355973 / 0.275898 (0.080075) | 0.419903 / 0.323480 (0.096423) | 0.009049 / 0.007986 (0.001064) | 0.004245 / 0.004328 (-0.000084) | 0.092004 / 0.004250 (0.087753) | 0.042782 / 0.037052 (0.005730) | 0.355805 / 0.258489 (0.097316) | 0.407298 / 0.293841 (0.113457) | 0.052481 / 0.128546 (-0.076066) | 0.020880 / 0.075646 (-0.054766) | 0.379948 / 0.419271 (-0.039324) | 0.061337 / 0.043533 (0.017804) | 0.359829 / 0.255139 (0.104690) | 0.379244 / 0.283200 (0.096044) | 0.116692 / 0.141683 (-0.024990) | 1.733717 / 1.452155 (0.281562) | 1.700246 / 1.492716 (0.207530) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.014622 / 0.018006 (-0.003384) | 0.518777 / 0.000490 (0.518288) | 0.004086 / 0.000200 (0.003886) | 0.000136 / 0.000054 (0.000082) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031208 / 0.037411 (-0.006204) | 0.143003 / 0.014526 (0.128477) | 0.132625 / 0.176557 (-0.043932) | 0.187681 / 0.737135 (-0.549455) | 0.136576 / 0.296338 (-0.159763) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.626516 / 0.215209 (0.411307) | 6.282558 / 2.077655 (4.204904) | 2.702686 / 1.504120 (1.198566) | 2.287445 / 1.541195 (0.746250) | 2.333014 / 1.468490 
(0.864524) | 1.227815 / 4.584777 (-3.356962) | 5.545640 / 3.745712 (1.799928) | 4.953226 / 5.269862 (-0.316635) | 2.774549 / 4.565676 (-1.791128) | 0.145257 / 0.424275 (-0.279018) | 0.014887 / 0.007607 (0.007280) | 0.812226 / 0.226044 (0.586182) | 8.002727 / 2.268929 (5.733798) | 3.314852 / 55.444624 (-52.129773) | 2.602348 / 6.876477 (-4.274128) | 2.593511 / 2.142072 (0.451438) | 1.440498 / 4.805227 (-3.364730) | 0.254849 / 6.500664 (-6.245815) | 0.077020 / 0.075469 (0.001551) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.487633 / 1.841788 (-0.354155) | 17.385773 / 8.074308 (9.311465) | 21.775511 / 10.191392 (11.584118) | 0.273514 / 0.680424 (-0.406910) | 0.059644 / 0.534201 (-0.474557) | 0.578710 / 0.579283 (-0.000573) | 0.630221 / 0.434364 (0.195857) | 0.632089 / 0.540337 (0.091752) | 0.762367 / 1.386936 (-0.624569) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009513 / 0.011353 (-0.001840) | 0.006009 / 0.011008 (-0.004999) | 0.087589 / 0.038508 (0.049081) | 0.037487 / 0.023109 (0.014378) | 0.397660 / 0.275898 (0.121762) | 0.474438 / 0.323480 (0.150958) | 0.007373 / 0.007986 (-0.000613) | 0.005839 / 0.004328 (0.001511) | 0.092759 / 0.004250 (0.088509) | 0.052128 / 0.037052 (0.015075) | 0.382378 / 0.258489 (0.123889) | 0.458244 / 0.293841 (0.164403) | 0.057232 / 0.128546 (-0.071314) | 0.020662 / 0.075646 (-0.054984) | 0.110314 / 0.419271 (-0.308957) | 0.063014 / 0.043533 (0.019481) | 0.386020 / 0.255139 (0.130881) | 0.476169 / 0.283200 (0.192970) | 0.118081 / 0.141683 (-0.023602) | 1.724158 / 1.452155 (0.272003) | 1.862257 / 1.492716 (0.369541) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.224288 / 0.018006 (0.206281) | 0.523631 / 0.000490 (0.523141) | 0.004420 / 0.000200 (0.004220) | 0.000127 / 0.000054 (0.000073) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032359 / 0.037411 (-0.005052) | 0.140045 / 0.014526 (0.125519) | 0.138164 / 0.176557 (-0.038393) | 0.181068 / 0.737135 (-0.556067) | 0.143965 / 0.296338 (-0.152374) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.573809 / 0.215209 (0.358600) | 6.083247 / 2.077655 (4.005592) | 2.671258 / 1.504120 (1.167138) | 2.277062 / 1.541195 (0.735868) | 2.299544 / 1.468490 (0.831054) | 1.267351 / 4.584777 (-3.317425) | 5.494461 / 3.745712 (1.748749) | 5.083169 / 5.269862 (-0.186692) | 2.531738 / 4.565676 (-2.033938) | 0.151834 / 0.424275 (-0.272441) | 0.014123 / 0.007607 (0.006516) | 0.800222 / 0.226044 (0.574177) | 7.637624 / 2.268929 (5.368695) | 3.325574 / 55.444624 (-52.119050) | 2.563008 / 6.876477 (-4.313468) | 2.596259 / 2.142072 (0.454187) | 1.459206 / 4.805227 (-3.346021) | 0.237771 / 6.500664 (-6.262893) | 0.071854 / 0.075469 (-0.003615) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.605504 / 1.841788 (-0.236284) | 17.593594 / 8.074308 (9.519285) | 20.618005 / 10.191392 (10.426612) | 0.270938 / 0.680424 (-0.409486) | 0.026205 / 0.534201 (-0.507996) | 0.562223 / 0.579283 (-0.017060) | 0.617571 / 0.434364 (0.183207) | 0.616398 / 0.540337 (0.076060) | 0.715293 / 1.386936 (-0.671643) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#673dc0dd7d063b2313f7adcc9e0be53d4718f5cf \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.013213 / 0.011353 (0.001860) | 0.006253 / 0.011008 (-0.004756) | 0.125175 / 0.038508 (0.086667) | 0.037491 / 0.023109 (0.014382) | 0.401379 / 0.275898 (0.125481) | 0.395826 / 0.323480 (0.072346) | 0.009224 / 0.007986 (0.001238) | 0.005163 / 0.004328 (0.000835) | 0.096490 / 0.004250 (0.092239) | 0.042473 / 0.037052 (0.005420) | 0.383713 / 0.258489 (0.125224) | 0.429234 / 0.293841 (0.135393) | 0.063261 / 0.128546 (-0.065285) | 0.020114 / 0.075646 (-0.055532) | 0.401687 / 0.419271 (-0.017585) | 0.062831 / 0.043533 (0.019298) | 0.405211 / 0.255139 (0.150072) | 0.380810 / 0.283200 (0.097610) | 0.109166 / 0.141683 (-0.032517) | 1.869580 / 1.452155 (0.417426) | 1.949947 / 1.492716 (0.457231) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.207481 / 0.018006 (0.189475) | 0.504161 / 0.000490 (0.503671) | 0.008429 / 0.000200 (0.008229) | 0.000101 / 0.000054 (0.000047) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029182 / 0.037411 (-0.008229) | 0.126284 / 0.014526 (0.111758) | 0.140381 / 0.176557 (-0.036175) | 0.175878 / 0.737135 (-0.561257) | 0.138824 / 0.296338 (-0.157514) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.643658 / 0.215209 (0.428449) | 6.396224 / 2.077655 (4.318569) | 2.600702 / 1.504120 (1.096582) | 2.176721 / 1.541195 (0.635526) | 2.216116 / 1.468490 
(0.747626) | 1.235069 / 4.584777 (-3.349708) | 5.457228 / 3.745712 (1.711516) | 3.060455 / 5.269862 (-2.209407) | 2.028123 / 4.565676 (-2.537554) | 0.141617 / 0.424275 (-0.282658) | 0.016596 / 0.007607 (0.008989) | 0.804915 / 0.226044 (0.578870) | 7.968821 / 2.268929 (5.699893) | 3.340650 / 55.444624 (-52.103974) | 2.533620 / 6.876477 (-4.342856) | 2.457388 / 2.142072 (0.315315) | 1.486527 / 4.805227 (-3.318700) | 0.253767 / 6.500664 (-6.246897) | 0.082192 / 0.075469 (0.006723) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.470896 / 1.841788 (-0.370892) | 17.566637 / 8.074308 (9.492329) | 23.144148 / 10.191392 (12.952756) | 0.235510 / 0.680424 (-0.444913) | 0.046051 / 0.534201 (-0.488150) | 0.559954 / 0.579283 (-0.019329) | 0.645390 / 0.434364 (0.211026) | 0.690983 / 0.540337 (0.150646) | 0.776252 / 1.386936 (-0.610684) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010564 / 0.011353 (-0.000789) | 0.006150 / 0.011008 (-0.004858) | 0.100030 / 0.038508 (0.061522) | 0.036873 / 0.023109 (0.013764) | 0.448508 / 0.275898 (0.172610) | 0.492593 / 0.323480 (0.169113) | 0.007337 / 0.007986 (-0.000648) | 0.004804 / 0.004328 (0.000475) | 0.099218 / 0.004250 (0.094967) | 0.055513 / 0.037052 (0.018461) | 0.462147 / 0.258489 (0.203658) | 0.510229 / 0.293841 (0.216388) | 0.055307 / 0.128546 (-0.073239) | 0.021989 / 0.075646 (-0.053657) | 0.118487 / 0.419271 (-0.300785) | 0.071752 / 0.043533 (0.028219) | 0.456572 / 0.255139 (0.201433) | 0.475160 / 0.283200 (0.191961) | 0.117472 / 0.141683 (-0.024211) | 1.813212 / 1.452155 (0.361058) | 1.908413 / 1.492716 (0.415696) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.352929 / 0.018006 (0.334923) | 0.543874 / 0.000490 (0.543384) | 0.078529 / 0.000200 (0.078329) | 0.000669 / 0.000054 (0.000614) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033157 / 0.037411 (-0.004254) | 0.162503 / 0.014526 (0.147977) | 0.146424 / 0.176557 (-0.030132) | 0.201781 / 0.737135 (-0.535354) | 0.168110 / 0.296338 (-0.128229) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.644205 / 0.215209 (0.428996) | 6.327519 / 2.077655 (4.249865) | 2.728102 / 1.504120 (1.223982) | 2.306426 / 1.541195 (0.765232) | 2.373125 / 1.468490 (0.904635) | 1.350649 / 4.584777 (-3.234128) | 5.652714 / 3.745712 (1.907002) | 3.175335 / 5.269862 (-2.094526) | 2.222902 / 4.565676 (-2.342775) | 0.160609 / 0.424275 (-0.263666) | 0.015596 / 0.007607 (0.007989) | 0.790357 / 0.226044 (0.564313) | 8.289758 / 2.268929 (6.020830) | 3.479215 / 55.444624 (-51.965410) | 2.860063 / 6.876477 (-4.016413) | 2.806720 / 2.142072 (0.664648) | 1.639046 / 4.805227 (-3.166181) | 0.267017 / 6.500664 (-6.233648) | 0.083990 / 0.075469 (0.008521) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.632262 / 1.841788 (-0.209525) | 17.794357 / 8.074308 (9.720049) | 21.203547 / 10.191392 (11.012155) | 0.250899 / 0.680424 (-0.429525) | 0.024502 / 0.534201 (-0.509699) | 0.519960 / 0.579283 (-0.059323) | 0.615412 / 0.434364 (0.181048) | 0.641914 / 0.540337 (0.101577) | 0.772355 / 1.386936 (-0.614581) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#32cc4d10243b0feb69650f007d010971fd861dc1 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009501 / 0.011353 (-0.001852) | 0.005262 / 0.011008 (-0.005747) | 0.100809 / 0.038508 (0.062301) | 0.036601 / 0.023109 (0.013492) | 0.299612 / 0.275898 (0.023714) | 0.366970 / 0.323480 (0.043490) | 0.007879 / 0.007986 (-0.000107) | 0.004216 / 0.004328 (-0.000113) | 0.076749 / 0.004250 (0.072498) | 0.042081 / 0.037052 (0.005029) | 0.299572 / 0.258489 (0.041083) | 0.339687 / 0.293841 (0.045846) | 0.038706 / 0.128546 (-0.089840) | 0.012295 / 0.075646 (-0.063352) | 0.336172 / 0.419271 (-0.083100) | 0.047524 / 0.043533 (0.003992) | 0.296800 / 0.255139 (0.041661) | 0.331592 / 0.283200 (0.048393) | 0.101191 / 0.141683 (-0.040491) | 1.486200 / 1.452155 (0.034046) | 1.509955 / 1.492716 (0.017239) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.204735 / 0.018006 (0.186728) | 0.446381 / 0.000490 (0.445891) | 0.005177 / 0.000200 (0.004977) | 0.000099 / 0.000054 (0.000045) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028655 / 0.037411 (-0.008756) | 0.116559 / 0.014526 (0.102033) | 0.122551 / 0.176557 (-0.054006) | 0.189764 / 0.737135 (-0.547372) | 0.126446 / 0.296338 (-0.169892) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.400104 / 0.215209 (0.184895) | 4.001524 / 2.077655 (1.923869) | 1.779267 / 1.504120 (0.275147) | 1.580168 / 1.541195 (0.038974) | 1.684100 / 1.468490 
(0.215610) | 0.703354 / 4.584777 (-3.881423) | 3.828131 / 3.745712 (0.082419) | 2.098500 / 5.269862 (-3.171362) | 1.331161 / 4.565676 (-3.234516) | 0.085417 / 0.424275 (-0.338858) | 0.012380 / 0.007607 (0.004772) | 0.504189 / 0.226044 (0.278144) | 5.094672 / 2.268929 (2.825743) | 2.264352 / 55.444624 (-53.180272) | 1.909573 / 6.876477 (-4.966904) | 2.005425 / 2.142072 (-0.136648) | 0.840893 / 4.805227 (-3.964335) | 0.164689 / 6.500664 (-6.335975) | 0.062754 / 0.075469 (-0.012715) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.250001 / 1.841788 (-0.591786) | 14.993313 / 8.074308 (6.919005) | 14.880601 / 10.191392 (4.689209) | 0.175141 / 0.680424 (-0.505283) | 0.028952 / 0.534201 (-0.505249) | 0.447073 / 0.579283 (-0.132210) | 0.445993 / 0.434364 (0.011629) | 0.525527 / 0.540337 (-0.014811) | 0.613156 / 1.386936 (-0.773780) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007796 / 0.011353 (-0.003557) | 0.005399 / 0.011008 (-0.005609) | 0.078240 / 0.038508 (0.039732) | 0.035303 / 0.023109 (0.012193) | 0.364603 / 0.275898 (0.088705) | 0.400794 / 0.323480 (0.077314) | 0.006152 / 0.007986 (-0.001834) | 0.004324 / 0.004328 (-0.000004) | 0.074949 / 0.004250 (0.070698) | 0.051939 / 0.037052 (0.014887) | 0.377079 / 0.258489 (0.118590) | 0.413630 / 0.293841 (0.119789) | 0.037567 / 0.128546 (-0.090979) | 0.012793 / 0.075646 (-0.062854) | 0.089013 / 0.419271 (-0.330258) | 0.050748 / 0.043533 (0.007215) | 0.370100 / 0.255139 (0.114961) | 0.384838 / 0.283200 (0.101638) | 0.105840 / 0.141683 (-0.035843) | 1.476490 / 1.452155 (0.024335) | 1.544688 / 1.492716 (0.051972) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.220987 / 0.018006 (0.202981) | 0.443801 / 0.000490 (0.443311) | 0.005747 / 0.000200 (0.005547) | 0.000106 / 0.000054 (0.000051) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030187 / 0.037411 (-0.007225) | 0.118230 / 0.014526 (0.103704) | 0.126810 / 0.176557 (-0.049746) | 0.200482 / 0.737135 (-0.536654) | 0.130831 / 0.296338 (-0.165507) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.423231 / 0.215209 (0.208022) | 4.196576 / 2.077655 (2.118921) | 1.992919 / 1.504120 (0.488799) | 1.809172 / 1.541195 (0.267977) | 1.932706 / 1.468490 (0.464216) | 0.727319 / 4.584777 (-3.857458) | 3.833295 / 3.745712 (0.087583) | 3.527005 / 5.269862 (-1.742857) | 1.937348 / 4.565676 (-2.628329) | 0.088713 / 0.424275 (-0.335562) | 0.012711 / 0.007607 (0.005104) | 0.531385 / 0.226044 (0.305341) | 5.308051 / 2.268929 (3.039123) | 2.493494 / 55.444624 (-52.951131) | 2.168359 / 6.876477 (-4.708118) | 2.258160 / 2.142072 (0.116088) | 0.865629 / 4.805227 (-3.939598) | 0.171281 / 6.500664 (-6.329383) | 0.065746 / 0.075469 (-0.009723) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.290378 / 1.841788 (-0.551409) | 15.900804 / 8.074308 (7.826496) | 14.809614 / 10.191392 (4.618222) | 0.177287 / 0.680424 (-0.503137) | 0.017875 / 0.534201 (-0.516326) | 0.429646 / 0.579283 (-0.149637) | 0.451646 / 0.434364 (0.017282) | 0.545669 / 0.540337 (0.005332) | 0.633215 / 1.386936 (-0.753721) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#2c67b5f4bc9cea088e977a135644d38da8c144ff \"CML watermark\")\n"
] | "2023-02-21T15:26:26Z" | "2023-02-21T17:21:37Z" | "2023-02-21T17:14:29Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5559.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5559",
"merged_at": "2023-02-21T17:14:29Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5559.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5559"
} | #5455 introduced a small bug that led `map` to ignore the `suffix_template` argument and fail to add suffixes to cached files in multiprocessing.
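For context, a minimal sketch of what `suffix_template` controls when `num_proc > 1`; the template string is the documented default from the `map()` signature, but the file-name assembly below is an illustration rather than the exact internals:

```python
# Illustrative only: how each worker's cache file gets its suffix.
# "_{rank:05d}_of_{num_proc:05d}" is the documented default suffix_template.
suffix_template = "_{rank:05d}_of_{num_proc:05d}"
num_proc = 4
cache_file_name = "tokenized.arrow"

base, ext = cache_file_name.rsplit(".", 1)
for rank in range(num_proc):
    suffix = suffix_template.format(rank=rank, num_proc=num_proc)
    print(f"{base}{suffix}.{ext}")
# tokenized_00000_of_00004.arrow ... tokenized_00003_of_00004.arrow
```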
I fixed this and also improved a few things:
- regarding logging: "Loading cached processed dataset" is now logged only once even in multiprocessing (it used to be logged `num_proc` times)
- regarding new_fingerprint: I made sure that the returned dataset satisfies `ds._fingerprint==new_fingerprint` if `new_fingerprint` is passed to `map` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 2,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5559/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5559/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5762 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5762/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5762/comments | https://api.github.com/repos/huggingface/datasets/issues/5762/events | https://github.com/huggingface/datasets/issues/5762 | 1,670,326,470 | I_kwDODunzps5jjyjG | 5,762 | Not able to load the pile | {
"avatar_url": "https://avatars.githubusercontent.com/u/17240858?v=4",
"events_url": "https://api.github.com/users/surya-narayanan/events{/privacy}",
"followers_url": "https://api.github.com/users/surya-narayanan/followers",
"following_url": "https://api.github.com/users/surya-narayanan/following{/other_user}",
"gists_url": "https://api.github.com/users/surya-narayanan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/surya-narayanan",
"id": 17240858,
"login": "surya-narayanan",
"node_id": "MDQ6VXNlcjE3MjQwODU4",
"organizations_url": "https://api.github.com/users/surya-narayanan/orgs",
"received_events_url": "https://api.github.com/users/surya-narayanan/received_events",
"repos_url": "https://api.github.com/users/surya-narayanan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/surya-narayanan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/surya-narayanan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/surya-narayanan"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | [
"Thanks for reporting, @surya-narayanan.\r\n\r\nI see you already started a discussion about this on the Community tab of the corresponding dataset: https://huggingface.co/datasets/EleutherAI/the_pile/discussions/10\r\nLet's continue the discussion there!"
] | "2023-04-17T03:09:10Z" | "2023-04-17T09:37:27Z" | "2023-04-17T09:37:27Z" | NONE | null | null | null | ### Describe the bug
Got this error when trying to load the Pile dataset:
```
TypeError: Couldn't cast array of type
struct<file: string, id: string>
to
{'id': Value(dtype='string', id=None)}
```
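The cast failure can be reproduced in isolation. The sketch below uses made-up rows (not the actual Pile shards) and the public `cast` API; with `datasets` 2.11 this raises the same `TypeError`, though newer releases may handle mismatched struct fields differently:

```python
from datasets import Dataset, Features, Value

# Hypothetical rows standing in for differently-shaped Pile shards: the data
# carries meta = {"file", "id"} while the expected features declare only
# {"id"}, reproducing the "Couldn't cast" TypeError from the traceback above.
ds = Dataset.from_dict({"meta": [{"file": "a.txt", "id": "1"}]})
expected = Features({"meta": {"id": Value("string")}})

try:
    ds.cast(expected)
except TypeError as e:
    print(e)
```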
### Steps to reproduce the bug
Please visit the following sample notebook
https://colab.research.google.com/drive/1JHcjawcHL6QHhi5VcqYd07W2QCEj2nWK#scrollTo=ulJP3eJCI-tB
### Expected behavior
The Pile should load successfully, without a cast error.
### Environment info
- `datasets` version: 2.11.0
- Platform: Linux-5.10.147+-x86_64-with-glibc2.31
- Python version: 3.9.16
- Huggingface_hub version: 0.13.4
- PyArrow version: 9.0.0
- Pandas version: 1.5.3 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5762/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5762/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3004 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3004/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3004/comments | https://api.github.com/repos/huggingface/datasets/issues/3004/events | https://github.com/huggingface/datasets/pull/3004 | 1,014,336,617 | PR_kwDODunzps4smfPF | 3,004 | LexGLUE: A Benchmark Dataset for Legal Language Understanding in English. | {
"avatar_url": "https://avatars.githubusercontent.com/u/1626984?v=4",
"events_url": "https://api.github.com/users/iliaschalkidis/events{/privacy}",
"followers_url": "https://api.github.com/users/iliaschalkidis/followers",
"following_url": "https://api.github.com/users/iliaschalkidis/following{/other_user}",
"gists_url": "https://api.github.com/users/iliaschalkidis/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/iliaschalkidis",
"id": 1626984,
"login": "iliaschalkidis",
"node_id": "MDQ6VXNlcjE2MjY5ODQ=",
"organizations_url": "https://api.github.com/users/iliaschalkidis/orgs",
"received_events_url": "https://api.github.com/users/iliaschalkidis/received_events",
"repos_url": "https://api.github.com/users/iliaschalkidis/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/iliaschalkidis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iliaschalkidis/subscriptions",
"type": "User",
"url": "https://api.github.com/users/iliaschalkidis"
} | [] | closed | false | null | [] | null | [
"Please wait until Tuesday. Arxiv pre-print is pending. 🤗 ",
"Hi @lhoestq, I updated the README with the Arxiv publication info and now the tests are not passing.\r\n\r\nIt seems that the error is completely irrelevant to my code:\r\n\r\n```\r\n Attempting uninstall: ruamel.yaml\r\n Found existing installation: ruamel-yaml 0.15.87\r\nERROR: Cannot uninstall 'ruamel-yaml'. It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall.\r\n```",
"Hi ! Thanks for adding this one. Amazing work :o \r\n\r\nYea you can ignore the `ruamel-yaml` error, it's unrelated to your PR and fixed on `master`",
"Hi @lhoestq, \r\n\r\n- I fetched and merged the upstream master to get the `ruamel-yaml` fix.\r\n- I updated the README.md file including more information for the inputs and labels, while I also turned all tables in HTML format hoping that they will render nicely in the dataset card in the HF website.\r\n- I also simplified the CaseHOLD input, excl. the unused `question` field and the `context` replicas, as suggested.\r\n"
] | "2021-10-03T10:03:25Z" | "2021-10-13T13:37:02Z" | "2021-10-13T13:37:01Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3004.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3004",
"merged_at": "2021-10-13T13:37:01Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3004.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3004"
} | Inspired by the recent widespread use of the GLUE multi-task benchmark NLP dataset (Wang et al., 2018), the subsequent more difficult SuperGLUE (Wang et al., 2019), other previous multi-task NLP benchmarks (Conneau and Kiela, 2018; McCann et al., 2018), and similar initiatives in other domains (Peng et al., 2019), we introduce the Legal General Language Understanding Evaluation (LexGLUE) benchmark, a benchmark dataset to evaluate the performance of NLP methods in legal tasks. LexGLUE is based on seven existing legal NLP datasets, selected using criteria largely from SuperGLUE.
As in GLUE and SuperGLUE (Wang et al., 2019b,a), one of our goals is to push towards generic (or ‘foundation’) models that can cope with multiple NLP tasks, in our case legal NLP tasks, possibly with limited task-specific fine-tuning. Another goal is to provide a convenient and informative entry point for NLP researchers and practitioners wishing to explore or develop methods for legal NLP. Having these goals in mind, the datasets we include in LexGLUE and the tasks they address have been simplified in several ways to make it easier for newcomers and generic models to address all tasks.
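A minimal usage sketch; the `lex_glue` loader name and `ecthr_a` config follow the paper's task list, and the field names are taken from the dataset card, so treat them as assumptions if the Hub naming changes:

```python
from datasets import load_dataset

# LexGLUE bundles seven legal NLP tasks (ECtHR A/B, SCOTUS, EUR-LEX, LEDGAR,
# UNFAIR-ToS, CaseHOLD) behind one loader; each task is a separate config.
ecthr = load_dataset("lex_glue", "ecthr_a")
print(ecthr["train"][0]["text"][0])       # first paragraph of the first case
print(ecthr["train"].features["labels"])  # multi-label set of ECHR articles
```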
LexGLUE benchmark is accompanied by experimental infrastructure that relies on Hugging Face Transformers library and resides at: https://github.com/coastalcph/lex-glue. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 1,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3004/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3004/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/440 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/440/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/440/comments | https://api.github.com/repos/huggingface/datasets/issues/440/events | https://github.com/huggingface/datasets/pull/440 | 666,116,823 | MDExOlB1bGxSZXF1ZXN0NDU3MDE2MjQy | 440 | Fix user specified features in map | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | "2020-07-27T09:04:26Z" | "2020-07-28T09:25:23Z" | "2020-07-28T09:25:22Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/440.diff",
"html_url": "https://github.com/huggingface/datasets/pull/440",
"merged_at": "2020-07-28T09:25:22Z",
"patch_url": "https://github.com/huggingface/datasets/pull/440.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/440"
} | `.map` didn't keep the user-specified features because of an issue in the writer.
The writer used to overwrite the user-specified features with inferred features.
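A minimal sketch of the guaranteed behavior; the column and label names are illustrative, not taken from the PR's tests:

```python
from datasets import ClassLabel, Dataset, Features, Value

ds = Dataset.from_dict({"text": ["good", "bad"], "label": [1, 0]})
features = Features({"text": Value("string"),
                     "label": ClassLabel(names=["neg", "pos"])})

# Before this fix, the writer re-inferred a plain int64 type for "label" and
# silently dropped the user-specified ClassLabel; now the features survive map.
mapped = ds.map(lambda ex: ex, features=features)
assert mapped.features["label"] == features["label"]
```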
I also added tests to make sure it doesn't happen again. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/440/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/440/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6319 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6319/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6319/comments | https://api.github.com/repos/huggingface/datasets/issues/6319/events | https://github.com/huggingface/datasets/issues/6319 | 1,952,101,717 | I_kwDODunzps50WrVV | 6,319 | Datasets.map is severely broken | {
"avatar_url": "https://avatars.githubusercontent.com/u/4603365?v=4",
"events_url": "https://api.github.com/users/phalexo/events{/privacy}",
"followers_url": "https://api.github.com/users/phalexo/followers",
"following_url": "https://api.github.com/users/phalexo/following{/other_user}",
"gists_url": "https://api.github.com/users/phalexo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/phalexo",
"id": 4603365,
"login": "phalexo",
"node_id": "MDQ6VXNlcjQ2MDMzNjU=",
"organizations_url": "https://api.github.com/users/phalexo/orgs",
"received_events_url": "https://api.github.com/users/phalexo/received_events",
"repos_url": "https://api.github.com/users/phalexo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/phalexo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/phalexo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/phalexo"
} | [] | open | false | null | [] | null | [
"Hi! Instead of processing a single example at a time, you should use the batched `map` for the best performance (with `num_proc=1`) - the fast tokenizers can process a batch's samples in parallel in that scenario.\r\n\r\nE.g., the following code in Colab takes an hour to complete:\r\n```python\r\n# !pip install datasets transformers\r\nfrom datasets import load_dataset\r\nfrom transformers import AutoTokenizer\r\ntokenizer = AutoTokenizer.from_pretrained(\"bert-base-cased\")\r\ndataset = dataset.map(lambda ex: tokenizer(ex[\"text\"]), batched=True, remove_columns=[\"text\", \"meta\"])\r\n```",
"Batched is far worse. A single batch of 1000 took hours and that was only 1%\r\n\r\n\r\nOn Thu, Oct 19, 2023, 2:26 PM Mario Šaško ***@***.***> wrote:\r\n\r\n> Hi! You should use the batched map for the best performance (with\r\n> num_proc=1) - the fast tokenizers can process a batch's samples in\r\n> parallel.\r\n>\r\n> E.g., the following code in Colab takes an hour to complete:\r\n>\r\n> # !pip install datasets transformersfrom datasets import load_datasetfrom transformers import AutoTokenizertokenizer = AutoTokenizer.from_pretrained(\"bert-base-cased\")dataset = dataset.map(lambda ex: tokenizer(ex[\"text\"]), batched=True, remove_columns=[\"text\", \"meta\"])\r\n>\r\n> —\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/datasets/issues/6319#issuecomment-1771503757>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/ABDD3ZJHPSRVDEXFNMXR2N3YAFWFZAVCNFSM6AAAAAA6HDKPSCVHI2DSMVQWIX3LMV43OSLTON2WKQ3PNVWWK3TUHMYTONZRGUYDGNZVG4>\r\n> .\r\n> You are receiving this because you authored the thread.Message ID:\r\n> ***@***.***>\r\n>\r\n",
"Can you please provide a self-contained reproducer?",
"Which specific version of datasets are you using?\r\n\r\nWhat is the architecture of your colab setup? Ram? Cores? OS?\r\n\r\n\r\nOn Thu, Oct 19, 2023, 2:27 PM pensive introvert ***@***.***>\r\nwrote:\r\n\r\n> Batched is far worse. A single batch of 1000 took hours and that was only\r\n> 1%\r\n>\r\n>\r\n> On Thu, Oct 19, 2023, 2:26 PM Mario Šaško ***@***.***>\r\n> wrote:\r\n>\r\n>> Hi! You should use the batched map for the best performance (with\r\n>> num_proc=1) - the fast tokenizers can process a batch's samples in\r\n>> parallel.\r\n>>\r\n>> E.g., the following code in Colab takes an hour to complete:\r\n>>\r\n>> # !pip install datasets transformersfrom datasets import load_datasetfrom transformers import AutoTokenizertokenizer = AutoTokenizer.from_pretrained(\"bert-base-cased\")dataset = dataset.map(lambda ex: tokenizer(ex[\"text\"]), batched=True, remove_columns=[\"text\", \"meta\"])\r\n>>\r\n>> —\r\n>> Reply to this email directly, view it on GitHub\r\n>> <https://github.com/huggingface/datasets/issues/6319#issuecomment-1771503757>,\r\n>> or unsubscribe\r\n>> <https://github.com/notifications/unsubscribe-auth/ABDD3ZJHPSRVDEXFNMXR2N3YAFWFZAVCNFSM6AAAAAA6HDKPSCVHI2DSMVQWIX3LMV43OSLTON2WKQ3PNVWWK3TUHMYTONZRGUYDGNZVG4>\r\n>> .\r\n>> You are receiving this because you authored the thread.Message ID:\r\n>> ***@***.***>\r\n>>\r\n>\r\n",
"from functools import partial\r\nimport transformers\r\nfrom datasets import load_dataset, concatenate_datasets, load_from_disk\r\n\r\nmodel_name_or_path=\"/opt/data/data/daryl149/llama-2-7b-chat-hf\"\r\noutput_dir=\"/opt/data/data/LongLoRA/checkpoints\"\r\ncache_dir=\"/opt/data/data/LongLoRA/cache\"\r\nmodel_max_length=16384\r\n\r\nIGNORE_INDEX = -100\r\nDEFAULT_PAD_TOKEN = \"[PAD]\"\r\nDEFAULT_EOS_TOKEN = \"</s>\"\r\nDEFAULT_BOS_TOKEN = \"<s>\"\r\nDEFAULT_UNK_TOKEN = \"<unk>\"\r\n\r\n\r\ntokenizer = transformers.LlamaTokenizerFast.from_pretrained(\r\n model_name_or_path,\r\n cache_dir=cache_dir,\r\n model_max_length=model_max_length,\r\n padding_side=\"right\",\r\n use_fast=True,\r\n #use_fast=False\r\n)\r\n\r\nspecial_tokens_dict = dict()\r\nif tokenizer.pad_token is None:\r\n special_tokens_dict[\"pad_token\"] = DEFAULT_PAD_TOKEN\r\nif tokenizer.eos_token is None:\r\n special_tokens_dict[\"eos_token\"] = DEFAULT_EOS_TOKEN\r\nif tokenizer.bos_token is None:\r\n special_tokens_dict[\"bos_token\"] = DEFAULT_BOS_TOKEN\r\nif tokenizer.unk_token is None:\r\n special_tokens_dict[\"unk_token\"] = DEFAULT_UNK_TOKEN\r\n\r\ntokenizer.add_special_tokens(special_tokens_dict)\r\n\r\ndef tokenize_fn(tokenizer, example):\r\n context_length = tokenizer.model_max_length\r\n outputs = tokenizer(\r\n tokenizer.eos_token.join(example[\"text\"]),\r\n #truncation=False,\r\n truncation=True,\r\n return_tensors=\"pt\",\r\n #return_tensors=\"np\",\r\n pad_to_multiple_of=context_length,\r\n padding=True,\r\n )\r\n return {\"input_ids\": outputs[\"input_ids\"].view(-1, context_length)}\r\n\r\nfor idx in range(100):\r\n dataset = load_dataset(\"togethercomputer/RedPajama-Data-1T-Sample\",\r\ncache_dir=cache_dir, split=f'train[{idx}%:{idx+1}%]')\r\n dataset = dataset.map(partial(tokenize_fn, tokenizer), batched=False,\r\nnum_proc=16, remove_columns=[\"text\", \"meta\"])\r\n dataset.save_to_disk(training_args.cache_dir + f\"/training_data_{idx}\")\r\n\r\n\r\nOn Thu, Oct 19, 2023 at 2:30 PM Mario Šaško ***@***.***>\r\nwrote:\r\n\r\n> Can you please provide a self-contained reproducer?\r\n>\r\n> —\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/datasets/issues/6319#issuecomment-1771509229>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/ABDD3ZNBZ3BE7Q4EQZZK6MLYAFWURAVCNFSM6AAAAAA6HDKPSCVHI2DSMVQWIX3LMV43OSLTON2WKQ3PNVWWK3TUHMYTONZRGUYDSMRSHE>\r\n> .\r\n> You are receiving this because you authored the thread.Message ID:\r\n> ***@***.***>\r\n>\r\n",
"I changed the tokenizer to one without \"Fast suffix, and something changed.\r\nThe fraction, although still slowed a lot at 80% was able to get over the\r\nfinish line of 100%\r\n\r\nI have to do more testng, see if the whole set can be processed\r\n\r\n\r\n\r\nOn Thu, Oct 19, 2023 at 3:03 PM pensive introvert <\r\n***@***.***> wrote:\r\n\r\n> from functools import partial\r\n> import transformers\r\n> from datasets import load_dataset, concatenate_datasets, load_from_disk\r\n>\r\n> model_name_or_path=\"/opt/data/data/daryl149/llama-2-7b-chat-hf\"\r\n> output_dir=\"/opt/data/data/LongLoRA/checkpoints\"\r\n> cache_dir=\"/opt/data/data/LongLoRA/cache\"\r\n> model_max_length=16384\r\n>\r\n> IGNORE_INDEX = -100\r\n> DEFAULT_PAD_TOKEN = \"[PAD]\"\r\n> DEFAULT_EOS_TOKEN = \"</s>\"\r\n> DEFAULT_BOS_TOKEN = \"<s>\"\r\n> DEFAULT_UNK_TOKEN = \"<unk>\"\r\n>\r\n>\r\n> tokenizer = transformers.LlamaTokenizerFast.from_pretrained(\r\n> model_name_or_path,\r\n> cache_dir=cache_dir,\r\n> model_max_length=model_max_length,\r\n> padding_side=\"right\",\r\n> use_fast=True,\r\n> #use_fast=False\r\n> )\r\n>\r\n> special_tokens_dict = dict()\r\n> if tokenizer.pad_token is None:\r\n> special_tokens_dict[\"pad_token\"] = DEFAULT_PAD_TOKEN\r\n> if tokenizer.eos_token is None:\r\n> special_tokens_dict[\"eos_token\"] = DEFAULT_EOS_TOKEN\r\n> if tokenizer.bos_token is None:\r\n> special_tokens_dict[\"bos_token\"] = DEFAULT_BOS_TOKEN\r\n> if tokenizer.unk_token is None:\r\n> special_tokens_dict[\"unk_token\"] = DEFAULT_UNK_TOKEN\r\n>\r\n> tokenizer.add_special_tokens(special_tokens_dict)\r\n>\r\n> def tokenize_fn(tokenizer, example):\r\n> context_length = tokenizer.model_max_length\r\n> outputs = tokenizer(\r\n> tokenizer.eos_token.join(example[\"text\"]),\r\n> #truncation=False,\r\n> truncation=True,\r\n> return_tensors=\"pt\",\r\n> #return_tensors=\"np\",\r\n> pad_to_multiple_of=context_length,\r\n> padding=True,\r\n> )\r\n> return {\"input_ids\": outputs[\"input_ids\"].view(-1, context_length)}\r\n>\r\n> for idx in range(100):\r\n> dataset = load_dataset(\"togethercomputer/RedPajama-Data-1T-Sample\",\r\n> cache_dir=cache_dir, split=f'train[{idx}%:{idx+1}%]')\r\n> dataset = dataset.map(partial(tokenize_fn, tokenizer), batched=False,\r\n> num_proc=16, remove_columns=[\"text\", \"meta\"])\r\n> dataset.save_to_disk(training_args.cache_dir + f\"/training_data_{idx}\")\r\n>\r\n>\r\n> On Thu, Oct 19, 2023 at 2:30 PM Mario Šaško ***@***.***>\r\n> wrote:\r\n>\r\n>> Can you please provide a self-contained reproducer?\r\n>>\r\n>> —\r\n>> Reply to this email directly, view it on GitHub\r\n>> <https://github.com/huggingface/datasets/issues/6319#issuecomment-1771509229>,\r\n>> or unsubscribe\r\n>> <https://github.com/notifications/unsubscribe-auth/ABDD3ZNBZ3BE7Q4EQZZK6MLYAFWURAVCNFSM6AAAAAA6HDKPSCVHI2DSMVQWIX3LMV43OSLTON2WKQ3PNVWWK3TUHMYTONZRGUYDSMRSHE>\r\n>> .\r\n>> You are receiving this because you authored the thread.Message ID:\r\n>> ***@***.***>\r\n>>\r\n>\r\n",
"So, using LlamaTokenizerFast was the problem. Changing it to LlamaTokenizer\r\nfixed things,\r\n\r\nOn Thu, Oct 19, 2023 at 4:04 PM pensive introvert <\r\n***@***.***> wrote:\r\n\r\n> I changed the tokenizer to one without \"Fast suffix, and something\r\n> changed. The fraction, although still slowed a lot at 80% was able to get\r\n> over the finish line of 100%\r\n>\r\n> I have to do more testng, see if the whole set can be processed\r\n>\r\n>\r\n>\r\n> On Thu, Oct 19, 2023 at 3:03 PM pensive introvert <\r\n> ***@***.***> wrote:\r\n>\r\n>> from functools import partial\r\n>> import transformers\r\n>> from datasets import load_dataset, concatenate_datasets, load_from_disk\r\n>>\r\n>> model_name_or_path=\"/opt/data/data/daryl149/llama-2-7b-chat-hf\"\r\n>> output_dir=\"/opt/data/data/LongLoRA/checkpoints\"\r\n>> cache_dir=\"/opt/data/data/LongLoRA/cache\"\r\n>> model_max_length=16384\r\n>>\r\n>> IGNORE_INDEX = -100\r\n>> DEFAULT_PAD_TOKEN = \"[PAD]\"\r\n>> DEFAULT_EOS_TOKEN = \"</s>\"\r\n>> DEFAULT_BOS_TOKEN = \"<s>\"\r\n>> DEFAULT_UNK_TOKEN = \"<unk>\"\r\n>>\r\n>>\r\n>> tokenizer = transformers.LlamaTokenizerFast.from_pretrained(\r\n>> model_name_or_path,\r\n>> cache_dir=cache_dir,\r\n>> model_max_length=model_max_length,\r\n>> padding_side=\"right\",\r\n>> use_fast=True,\r\n>> #use_fast=False\r\n>> )\r\n>>\r\n>> special_tokens_dict = dict()\r\n>> if tokenizer.pad_token is None:\r\n>> special_tokens_dict[\"pad_token\"] = DEFAULT_PAD_TOKEN\r\n>> if tokenizer.eos_token is None:\r\n>> special_tokens_dict[\"eos_token\"] = DEFAULT_EOS_TOKEN\r\n>> if tokenizer.bos_token is None:\r\n>> special_tokens_dict[\"bos_token\"] = DEFAULT_BOS_TOKEN\r\n>> if tokenizer.unk_token is None:\r\n>> special_tokens_dict[\"unk_token\"] = DEFAULT_UNK_TOKEN\r\n>>\r\n>> tokenizer.add_special_tokens(special_tokens_dict)\r\n>>\r\n>> def tokenize_fn(tokenizer, example):\r\n>> context_length = tokenizer.model_max_length\r\n>> outputs = tokenizer(\r\n>> tokenizer.eos_token.join(example[\"text\"]),\r\n>> #truncation=False,\r\n>> truncation=True,\r\n>> return_tensors=\"pt\",\r\n>> #return_tensors=\"np\",\r\n>> pad_to_multiple_of=context_length,\r\n>> padding=True,\r\n>> )\r\n>> return {\"input_ids\": outputs[\"input_ids\"].view(-1, context_length)}\r\n>>\r\n>> for idx in range(100):\r\n>> dataset = load_dataset(\"togethercomputer/RedPajama-Data-1T-Sample\",\r\n>> cache_dir=cache_dir, split=f'train[{idx}%:{idx+1}%]')\r\n>> dataset = dataset.map(partial(tokenize_fn, tokenizer), batched=False,\r\n>> num_proc=16, remove_columns=[\"text\", \"meta\"])\r\n>> dataset.save_to_disk(training_args.cache_dir +\r\n>> f\"/training_data_{idx}\")\r\n>>\r\n>>\r\n>> On Thu, Oct 19, 2023 at 2:30 PM Mario Šaško ***@***.***>\r\n>> wrote:\r\n>>\r\n>>> Can you please provide a self-contained reproducer?\r\n>>>\r\n>>> —\r\n>>> Reply to this email directly, view it on GitHub\r\n>>> <https://github.com/huggingface/datasets/issues/6319#issuecomment-1771509229>,\r\n>>> or unsubscribe\r\n>>> <https://github.com/notifications/unsubscribe-auth/ABDD3ZNBZ3BE7Q4EQZZK6MLYAFWURAVCNFSM6AAAAAA6HDKPSCVHI2DSMVQWIX3LMV43OSLTON2WKQ3PNVWWK3TUHMYTONZRGUYDSMRSHE>\r\n>>> .\r\n>>> You are receiving this because you authored the thread.Message ID:\r\n>>> ***@***.***>\r\n>>>\r\n>>\r\n",
"Indeed, the tokenizer is super slow. Perhaps @ArthurZucker knows the reason why.\r\n\r\n([This](https://colab.research.google.com/drive/1VgeurX-4Fl2X6aBQTwh_X4kuQKZ6K9L1?usp=sharing) simplified Colab can be used to reproduce the behavior)",
"same issue here\r\nsample to reproduce: https://github.com/philschmid/document-ai-transformers/blob/main/training/donut_sroie.ipynb\r\nwith following map line\r\nhttps://github.com/philschmid/document-ai-transformers/blob/main/training/donut_sroie.ipynb\r\n\r\nIf I directly iterate over the dataset and call the mapping method, it is very fast\r\n```py\r\nfor sample in dataset:\r\n def preprocess_documents_for_donut(sample):\r\n```\r\n\r\nif i removed `.convert('RGB')` It can run to completion without getting stuck. I suspect it has something to do with the Image.\r\n\r\nIf I use batch, it's even slower.",
"@ewfian \r\n\r\n> If I directly iterate over the dataset and call the mapping method, it is very fast\r\n\r\n`Dataset.map` must also convert the images into bytes to write them to an Arrow file (the write itself takes some time, too). \r\n\r\nYou can make the `map` faster by manually converting the images into an \"arrow-compatible\" representation. Otherwise, the Pillow defaults are used when saving an image, which seems particularly slow for the notebook's case.\r\n\r\n```python\r\ndef preprocess_documents_for_donut(sample):\r\n text = json.loads(sample[\"text\"])\r\n d_doc = task_start_token + json2token(text) + eos_token\r\n image = sample[\"image\"].convert('RGB')\r\n # convert image to bytes\r\n buffer = io.BytesIO()\r\n image.save(buffer, format=\"PNG\", compress_level=1)\r\n return {\"image\": {\"bytes\": buffer.getvalue()}, \"text\": d_doc}\r\n\r\nproc_dataset = dataset.map(preprocess_documents_for_donut, writer_batch_size=50)\r\n```",
"The problem I had was to do with map using fork and copying locks from the\r\nparent process in acquired state. I ended up changing the context to use\r\nforkserver instead.\r\n\r\n\r\nOn Wed, Nov 29, 2023, 10:04 PM Mario Šaško ***@***.***> wrote:\r\n\r\n> @ewfian <https://github.com/ewfian>\r\n>\r\n> If I directly iterate over the dataset and call the mapping method, it is\r\n> very fast\r\n>\r\n> Dataset.map must also convert the images into bytes to write them to an\r\n> Arrow file (the write itself takes some time, too).\r\n>\r\n> You can make the map faster by manually converting the images into an\r\n> \"arrow-compatible\" representation. Otherwise, the Pillow defaults are used\r\n> when saving an image, which seems particularly slow for the notebook's case.\r\n>\r\n> def preprocess_documents_for_donut(sample):\r\n> text = json.loads(sample[\"text\"])\r\n> d_doc = task_start_token + json2token(text) + eos_token\r\n> image = sample[\"image\"].convert('RGB')\r\n> # convert image to bytes\r\n> buffer = io.BytesIO()\r\n> image.save(buffer, format=\"PNG\", compress_level=1)\r\n> return {\"image\": {\"bytes\": buffer.getvalue()}, \"text\": d_doc}\r\n> proc_dataset = dataset.map(preprocess_documents_for_donut, writer_batch_size=50)\r\n>\r\n> —\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/datasets/issues/6319#issuecomment-1833033973>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/ABDD3ZKKEKJVWBFH7QHLRJ3YG7ZUJAVCNFSM6AAAAAA6HDKPSCVHI2DSMVQWIX3LMV43OSLTON2WKQ3PNVWWK3TUHMYTQMZTGAZTGOJXGM>\r\n> .\r\n> You are receiving this because you authored the thread.Message ID:\r\n> ***@***.***>\r\n>\r\n"
] | "2023-10-19T12:19:33Z" | "2023-11-30T03:27:26Z" | null | NONE | null | null | null | ### Describe the bug
Regardless of how many cores I use (I have 16 or 32 threads), map slows down to a crawl at around 80% done, lingers extremely slowly until about 97%, and NEVER finishes the job. It just hangs.
After watching this for 27 hours I Ctrl-C out of it. Until the end, one process appears to be doing something, but it never finishes.
I saw some comments about fast tokenizers using Rust and tried different variations. NOTHING works.
### Steps to reproduce the bug
Running it without breaking the dataset into parts results in the same behavior. The loop was an attempt to see if this was a RAM issue.
```python
for idx in range(100):
    dataset = load_dataset("togethercomputer/RedPajama-Data-1T-Sample", cache_dir=cache_dir, split=f'train[{idx}%:{idx+1}%]')
    dataset = dataset.map(partial(tokenize_fn, tokenizer), batched=False, num_proc=1, remove_columns=["text", "meta"])
    dataset.save_to_disk(training_args.cache_dir + f"/training_data_{idx}")
```
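For reference, the workaround that eventually fixed it for me (see the thread above) was switching the multiprocessing start method from fork to forkserver, so that map workers don't inherit locks copied from the parent in an acquired state. A minimal sketch; whether the worker pool in `datasets` (which uses the `multiprocess` package) picks the setting up from here is an assumption:

```python
import multiprocessing
import multiprocess  # the multiprocessing fork that `datasets` uses internally

if __name__ == "__main__":
    # Avoid fork-inherited locks that can leave map() workers hung.
    multiprocessing.set_start_method("forkserver", force=True)
    multiprocess.set_start_method("forkserver", force=True)
    # ... then run the load_dataset / map loop above ...
```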
### Expected behavior
I expect map to run at more or less the same speed it starts with and FINISH its processing.
### Environment info
Python 3.8; the same happens with 3.10, so the version makes no difference.
Ubuntu 20.04. | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6319/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6319/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/235 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/235/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/235/comments | https://api.github.com/repos/huggingface/datasets/issues/235/events | https://github.com/huggingface/datasets/pull/235 | 630,952,297 | MDExOlB1bGxSZXF1ZXN0NDI3OTM1MjQ0 | 235 | Add experimental datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yjernite",
"id": 10469459,
"login": "yjernite",
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"repos_url": "https://api.github.com/users/yjernite/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yjernite"
} | [] | closed | false | null | [] | null | [
"I think it would be nicer to not create a new folder `datasets_experimental` , but just put your datasets also into the folder `datasets` for the following reasons:\r\n\r\n- From my point of view, the datasets are not very different from the other datasets (assuming that we soon have C4, and the beam datasets) so I don't see why we require a new dataset folder\r\n\r\n- I'm not a big fan of adding a boolean flag to the `load_dataset()` function that basically switches between folder names on S3. The user has to know whether a dataset script is experimental or not. User installing nlp with pip won't see that there are folders called `datasets` and `datasets_experimental`\r\n\r\n- If we do this just to bypass the test, I think a good solution could be: For all tests that are too complicated to be currently tested with the testing framework, we can add a class variable called `do_test = False` to the dataset builder class and a default `do_test = True` to the abstract dataset class and skip all tests that have that variable in the dataset test framework similar to what is done to beam datasets: https://github.com/huggingface/nlp/blob/2e0a8639a79b1abc848cff5c669094d40bba0f63/tests/test_dataset_common.py#L79 \r\nWe can also print a warning for all dataset tests having `do_test = False`. This variable would only concern testing and we would not have a problem removing it at a later stage IMO.\r\n\r\n- This way the datascripts are backward compatible and can be used with earlier versions of `nlp` (not that this matters too much atm) \r\n\r\nWhat is your opinion on this @lhoestq @thomwolf ?",
"Very cool to have add those datasets :)\r\nI understand that making the dummy data for this case is not fun. I'm sure we'll be able to add them soon. For now it's still interesting to have them in the library, even if we can't test all the code with dummy data.\r\n\r\nI like the idea of the `do_tests=False` class variable. \r\nHowever it would be cool to test at least that we can load the module and instantiate the builder (only ignore the dummy data test for now). In that case a better name could be `test_dummy_data=False` or something like that.\r\n\r\nIf we want to be picky we can also add a warning in `_download_and_prepare` to tell the user that datasets with `test_dummy_data=False` are still experimental.",
"Yeah I really like the idea of a partial test.\r\n\r\nMy main concern with the class variable is visibility, but having a warning would help with that. Maybe even get the user to agree > \"are you sure you want to go ahead?\"",
"> Very cool to have add those datasets :)\r\n> I understand that making the dummy data for this case is not fun. I'm sure we'll be able to add them soon. For now it's still interesting to have them in the library, even if we can't test all the code with dummy data.\r\n> \r\n> I like the idea of the `do_tests=False` class variable.\r\n> However it would be cool to test at least that we can load the module and instantiate the builder (only ignore the dummy data test for now). In that case a better name could be `test_dummy_data=False` or something like that.\r\n> \r\n> If we want to be picky we can also add a warning in `_download_and_prepare` to tell the user that datasets with `test_dummy_data=False` are still experimental.\r\n\r\n`test_dummy_data=False` sounds good to me!",
"There we go: added a `test_dummy_data` class variable that is `False` by default for the `BeamBasedBuilder` and `True` for everyone else (except the new `explainlikeimfive` and `wiki_snippets`)\r\n\r\nNote that `wiki_snippets` should become obsolete as soon as @lhoestq adds in the `IndexedDataset` class",
"Great! LGTM!"
] | "2020-06-04T15:54:56Z" | "2020-06-12T15:38:55Z" | "2020-06-12T15:38:55Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/235.diff",
"html_url": "https://github.com/huggingface/datasets/pull/235",
"merged_at": "2020-06-12T15:38:55Z",
"patch_url": "https://github.com/huggingface/datasets/pull/235.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/235"
} | ## Adding an *experimental datasets* folder
After using the 🤗nlp library for some time, I find that while it makes it super easy to create new memory-mapped datasets with lots of cool utilities, a lot of what I want to do doesn't work well with the current `MockDownloader` based testing paradigm, making it hard to share my work with the community.
My suggestion would be to add a **datasets\_experimental** folder so we can start making these new datasets public without having to completely re-think testing for every single one. We would allow contributors to submit dataset PRs in this folder, but require an explanation for why the current testing suite doesn't work for them. We can then aggregate the feedback and periodically see what's missing from the current tests.
I have added a **datasets\_experimental** folder to the repository and S3 bucket with two initial datasets: ELI5 (explainlikeimfive) and a Wikipedia Snippets dataset to support indexing (wiki\_snippets)
### ELI5
#### Dataset description
This allows people to download the [ELI5: Long Form Question Answering](https://arxiv.org/abs/1907.09190) dataset, along with two variants based on the r/askscience and r/AskHistorians subreddits. Full Reddit dumps for each month are downloaded from [pushshift](https://files.pushshift.io/reddit/), filtered for submissions and comments from the desired subreddits, then deleted one at a time to save space. The resulting data is split into training, validation, and test sets for each of r/explainlikeimfive, r/askscience, and r/AskHistorians, where each item is a question along with all of its high-scoring answers.
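To make the filtering step concrete, here is a rough sketch of processing one monthly dump; the file name, the compression format, and the `subreddit` field name are assumptions for illustration (the real script has to cope with the compression type changing from month to month, as noted below):

```python
import bz2
import json

TARGET_SUBREDDITS = {"explainlikeimfive", "askscience", "AskHistorians"}

def filter_dump(path):
    # Stream one compressed monthly dump (newline-delimited JSON) and
    # keep only the submissions/comments from the target subreddits.
    with bz2.open(path, "rt") as f:
        for line in f:
            row = json.loads(line)
            if row.get("subreddit") in TARGET_SUBREDDITS:
                yield row

kept = list(filter_dump("RC_2019-01.bz2"))  # hypothetical dump file name
```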
#### Issues with the current testing
1. The list of files to be downloaded is not pre-defined, but rather determined by parsing an index web page at run time. This is necessary as the name and compression type of the dump files change from month to month as the pushshift website is maintained. Currently, the dummy folder requires the user to know which files will be downloaded.
2. To save time, the script works on the compressed files using the corresponding Python packages rather than first running `download\_and\_extract` and then filtering the extracted files.
### Wikipedia Snippets
#### Dataset description
This script creates a *snippets* version of a source Wikipedia dataset: each article is split into passages of fixed length which can then be indexed using ElasticSearch or a dense indexer. The script currently handles all **wikipedia** and **wiki40b** source datasets, and allows the user to choose the passage length and how much overlap they want across passages. In addition to the passage text, each snippet also has the article title, list of titles of sections covered by the text, and information to map the passage back to the initial dataset at the paragraph and character level.
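The splitting itself boils down to a sliding window over the article text. A minimal sketch with made-up parameter names (word-level `passage_len` and `overlap`; the actual script also records titles and paragraph/character offsets):

```python
def make_snippets(words, passage_len=100, overlap=20):
    # Fixed-length passages that share `overlap` words with their
    # neighbors (assumes overlap < passage_len).
    step = passage_len - overlap
    return [
        " ".join(words[start:start + passage_len])
        for start in range(0, max(len(words) - overlap, 1), step)
    ]

snippets = make_snippets(("some long article text " * 100).split())
```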
#### Issues with the current testing
1. The DatasetBuilder needs to call `nlp.load_dataset()`. Currently, testing is not recursive (the test doesn't know where to find the dummy data for the source dataset)
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/235/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/235/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2274 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2274/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2274/comments | https://api.github.com/repos/huggingface/datasets/issues/2274/events | https://github.com/huggingface/datasets/pull/2274 | 869,186,276 | MDExOlB1bGxSZXF1ZXN0NjI0NTkyMjQx | 2,274 | Always update metadata in arrow schema | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | "2021-04-27T19:21:57Z" | "2022-06-03T08:31:19Z" | "2021-04-29T09:57:50Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2274.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2274",
"merged_at": "2021-04-29T09:57:50Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2274.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2274"
} | We store a redundant copy of the features in the metadata of the schema of the arrow table. This is used to recover the features when doing `Dataset.from_file`. This metadata is updated after each transform that changes the feature types.
For each function that transforms the feature types of the dataset, I added a step in the tests to make sure the metadata in the arrow schema are up to date.
I also added a line to update the metadata directly in the Dataset.__init__ method.
This way even a dataset instantiated with __init__ will have a table with the right metadata.
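To illustrate what "up to date" means here, a small check; the exact metadata key (`huggingface`) and the JSON layout are implementation details, so treat them as assumptions:

```python
import json
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b"]})
ds = ds.rename_column("text", "sentence")  # a transform that changes the features

# The features stored in the Arrow schema metadata should now match
# ds.features (i.e. mention "sentence", not "text").
meta = ds.data.schema.metadata[b"huggingface"]
print(json.loads(meta)["info"]["features"])
```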
Fix #2271.
cc @mariosasko | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2274/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2274/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5778 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5778/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5778/comments | https://api.github.com/repos/huggingface/datasets/issues/5778/events | https://github.com/huggingface/datasets/issues/5778 | 1,678,125,951 | I_kwDODunzps5kBit_ | 5,778 | Schrödinger's dataset_dict | {
"avatar_url": "https://avatars.githubusercontent.com/u/902005?v=4",
"events_url": "https://api.github.com/users/liujuncn/events{/privacy}",
"followers_url": "https://api.github.com/users/liujuncn/followers",
"following_url": "https://api.github.com/users/liujuncn/following{/other_user}",
"gists_url": "https://api.github.com/users/liujuncn/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/liujuncn",
"id": 902005,
"login": "liujuncn",
"node_id": "MDQ6VXNlcjkwMjAwNQ==",
"organizations_url": "https://api.github.com/users/liujuncn/orgs",
"received_events_url": "https://api.github.com/users/liujuncn/received_events",
"repos_url": "https://api.github.com/users/liujuncn/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/liujuncn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/liujuncn/subscriptions",
"type": "User",
"url": "https://api.github.com/users/liujuncn"
} | [] | closed | false | null | [] | null | [
"Hi ! Passing `data_files=\"path/test.json\"` is equivalent to `data_files={\"train\": [\"path/test.json\"]}`, that's why you end up with a train split. If you don't pass `data_files=`, then split names are inferred from the data files names"
] | "2023-04-21T08:38:12Z" | "2023-07-24T15:15:14Z" | "2023-07-24T15:15:14Z" | NONE | null | null | null | ### Describe the bug
If you use `load_dataset('json', data_files="path/test.json")`, it will return `DatasetDict({train: ...})`.
And if you use `load_dataset("path")`, it will return `DatasetDict({test: ...})`.
Why can't the output behavior be unified?
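A small illustration of the two behaviors (the paths are hypothetical; per the explanation above, the first form is shorthand for `data_files={"train": ["path/test.json"]}`, while the second infers split names from the file names):

```python
from datasets import load_dataset

# data_files="path/test.json" is treated as {"train": ["path/test.json"]},
# so everything ends up under a "train" split:
print(load_dataset("json", data_files="path/test.json"))
# DatasetDict({'train': ...})

# Without data_files, the split name is inferred from the file name,
# so test.json yields a "test" split:
print(load_dataset("path"))
# DatasetDict({'test': ...})
```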
### Steps to reproduce the bug
as described above.
### Expected behavior
consistent predictable output.
### Environment info
'2.11.0' | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5778/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5778/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4564 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4564/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4564/comments | https://api.github.com/repos/huggingface/datasets/issues/4564/events | https://github.com/huggingface/datasets/pull/4564 | 1,283,932,333 | PR_kwDODunzps46UqUN | 4,564 | Support streaming bookcorpus dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | "2022-06-24T16:13:39Z" | "2022-07-06T09:34:48Z" | "2022-07-06T09:23:04Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4564.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4564",
"merged_at": "2022-07-06T09:23:04Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4564.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4564"
} | Support streaming bookcorpus dataset. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4564/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4564/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3765 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3765/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3765/comments | https://api.github.com/repos/huggingface/datasets/issues/3765/events | https://github.com/huggingface/datasets/pull/3765 | 1,145,126,881 | PR_kwDODunzps4zMdIL | 3,765 | Update URL for tagging app | {
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lewtun",
"id": 26859204,
"login": "lewtun",
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"repos_url": "https://api.github.com/users/lewtun/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lewtun"
} | [] | closed | false | null | [] | null | [
"Oh, this URL shouldn't be updated to the tagging app as it's actually used for creating the README - closing this."
] | "2022-02-20T20:34:31Z" | "2022-02-20T20:36:10Z" | "2022-02-20T20:36:06Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3765.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3765",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/3765.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3765"
} | This PR updates the URL for the tagging app to be the one on Spaces. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3765/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3765/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4058 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4058/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4058/comments | https://api.github.com/repos/huggingface/datasets/issues/4058/events | https://github.com/huggingface/datasets/pull/4058 | 1,185,611,600 | PR_kwDODunzps41RPhl | 4,058 | Updated annotations for nli_tr dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/2246791?v=4",
"events_url": "https://api.github.com/users/e-budur/events{/privacy}",
"followers_url": "https://api.github.com/users/e-budur/followers",
"following_url": "https://api.github.com/users/e-budur/following{/other_user}",
"gists_url": "https://api.github.com/users/e-budur/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/e-budur",
"id": 2246791,
"login": "e-budur",
"node_id": "MDQ6VXNlcjIyNDY3OTE=",
"organizations_url": "https://api.github.com/users/e-budur/orgs",
"received_events_url": "https://api.github.com/users/e-budur/received_events",
"repos_url": "https://api.github.com/users/e-budur/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/e-budur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/e-budur/subscriptions",
"type": "User",
"url": "https://api.github.com/users/e-budur"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Thank you so much @[lhoestq](https://github.com/lhoestq) for the time you take to your review the PR!"
] | "2022-03-29T23:46:59Z" | "2022-04-12T20:55:12Z" | "2022-04-12T10:37:22Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4058.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4058",
"merged_at": "2022-04-12T10:37:22Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4058.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4058"
} | This PR adds annotation tags for the `nli_tr` dataset so that it can be searched with respect to the relevant query parameters.
The annotations in this PR are based on the existing annotations of the `snli` and `multi_nli` datasets, as `nli_tr` is a machine-generated extension of those datasets.
This PR is intended only for updating the annotation labels, but a follow-up PR will focus on filling in the missing sections of the `README.md` as well.
Thanks for taking the time to review it. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4058/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4058/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/228 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/228/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/228/comments | https://api.github.com/repos/huggingface/datasets/issues/228/events | https://github.com/huggingface/datasets/issues/228 | 629,952,402 | MDU6SXNzdWU2Mjk5NTI0MDI= | 228 | Not able to access the XNLI dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/11817160?v=4",
"events_url": "https://api.github.com/users/aswin-giridhar/events{/privacy}",
"followers_url": "https://api.github.com/users/aswin-giridhar/followers",
"following_url": "https://api.github.com/users/aswin-giridhar/following{/other_user}",
"gists_url": "https://api.github.com/users/aswin-giridhar/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/aswin-giridhar",
"id": 11817160,
"login": "aswin-giridhar",
"node_id": "MDQ6VXNlcjExODE3MTYw",
"organizations_url": "https://api.github.com/users/aswin-giridhar/orgs",
"received_events_url": "https://api.github.com/users/aswin-giridhar/received_events",
"repos_url": "https://api.github.com/users/aswin-giridhar/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/aswin-giridhar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aswin-giridhar/subscriptions",
"type": "User",
"url": "https://api.github.com/users/aswin-giridhar"
} | [
{
"color": "94203D",
"default": false,
"description": "",
"id": 2107841032,
"name": "nlp-viewer",
"node_id": "MDU6TGFiZWwyMTA3ODQxMDMy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/35882?v=4",
"events_url": "https://api.github.com/users/srush/events{/privacy}",
"followers_url": "https://api.github.com/users/srush/followers",
"following_url": "https://api.github.com/users/srush/following{/other_user}",
"gists_url": "https://api.github.com/users/srush/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/srush",
"id": 35882,
"login": "srush",
"node_id": "MDQ6VXNlcjM1ODgy",
"organizations_url": "https://api.github.com/users/srush/orgs",
"received_events_url": "https://api.github.com/users/srush/received_events",
"repos_url": "https://api.github.com/users/srush/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/srush/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/srush/subscriptions",
"type": "User",
"url": "https://api.github.com/users/srush"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/35882?v=4",
"events_url": "https://api.github.com/users/srush/events{/privacy}",
"followers_url": "https://api.github.com/users/srush/followers",
"following_url": "https://api.github.com/users/srush/following{/other_user}",
"gists_url": "https://api.github.com/users/srush/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/srush",
"id": 35882,
"login": "srush",
"node_id": "MDQ6VXNlcjM1ODgy",
"organizations_url": "https://api.github.com/users/srush/orgs",
"received_events_url": "https://api.github.com/users/srush/received_events",
"repos_url": "https://api.github.com/users/srush/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/srush/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/srush/subscriptions",
"type": "User",
"url": "https://api.github.com/users/srush"
}
] | null | [
"Added pull request to change the name of the file from dataset_infos.json to dataset_info.json",
"Thanks for reporting this bug !\r\nAs it seems to be just a cache problem, I closed your PR.\r\nI think we might just need to clear and reload the `xnli` cache @srush ? ",
"Update: The dataset_info.json error is gone, but we have a new one instead:\r\n```\r\nConnectionError: Couldn't reach https://www.nyu.edu/projects/bowman/xnli/XNLI-1.0.zip\r\n```\r\nI am not able to reproduce on my side unfortunately. Any idea @srush ?",
"xnli is now properly shown in the viewer.\r\nClosing this one."
] | "2020-06-03T12:25:14Z" | "2020-07-17T17:44:22Z" | "2020-07-17T17:44:22Z" | NONE | null | null | null | When I try to access the XNLI dataset, I get the following error. The option of plain_text get selected automatically and then I get the following error.
```
FileNotFoundError: [Errno 2] No such file or directory: '/home/sasha/.cache/huggingface/datasets/xnli/plain_text/1.0.0/dataset_info.json'
Traceback:
File "/home/sasha/.local/lib/python3.7/site-packages/streamlit/ScriptRunner.py", line 322, in _run_script
exec(code, module.__dict__)
File "/home/sasha/nlp_viewer/run.py", line 86, in <module>
dts, fail = get(str(option.id), str(conf_option.name) if conf_option else None)
File "/home/sasha/.local/lib/python3.7/site-packages/streamlit/caching.py", line 591, in wrapped_func
return get_or_create_cached_value()
File "/home/sasha/.local/lib/python3.7/site-packages/streamlit/caching.py", line 575, in get_or_create_cached_value
return_value = func(*args, **kwargs)
File "/home/sasha/nlp_viewer/run.py", line 72, in get
builder_instance = builder_cls(name=conf)
File "/home/sasha/.local/lib/python3.7/site-packages/nlp/builder.py", line 610, in __init__
super(GeneratorBasedBuilder, self).__init__(*args, **kwargs)
File "/home/sasha/.local/lib/python3.7/site-packages/nlp/builder.py", line 152, in __init__
self.info = DatasetInfo.from_directory(self._cache_dir)
File "/home/sasha/.local/lib/python3.7/site-packages/nlp/info.py", line 157, in from_directory
with open(os.path.join(dataset_info_dir, DATASET_INFO_FILENAME), "r") as f:
```
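A quick way to check, assuming the cache layout from the traceback above (the comments suggest the file may have been written as `dataset_infos.json` instead):

```python
import os

cache_dir = os.path.expanduser("~/.cache/huggingface/datasets/xnli/plain_text/1.0.0")
print(os.listdir(cache_dir))  # dataset_info.json or dataset_infos.json?
```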
Is it possible to see if the dataset_info.json is correctly placed? | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/228/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/228/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4847 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4847/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4847/comments | https://api.github.com/repos/huggingface/datasets/issues/4847/events | https://github.com/huggingface/datasets/pull/4847 | 1,338,270,636 | PR_kwDODunzps49JNWX | 4,847 | Test win ci | {
"avatar_url": "https://avatars.githubusercontent.com/u/49282718?v=4",
"events_url": "https://api.github.com/users/Mr-Robot-001/events{/privacy}",
"followers_url": "https://api.github.com/users/Mr-Robot-001/followers",
"following_url": "https://api.github.com/users/Mr-Robot-001/following{/other_user}",
"gists_url": "https://api.github.com/users/Mr-Robot-001/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Mr-Robot-001",
"id": 49282718,
"login": "Mr-Robot-001",
"node_id": "MDQ6VXNlcjQ5MjgyNzE4",
"organizations_url": "https://api.github.com/users/Mr-Robot-001/orgs",
"received_events_url": "https://api.github.com/users/Mr-Robot-001/received_events",
"repos_url": "https://api.github.com/users/Mr-Robot-001/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Mr-Robot-001/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mr-Robot-001/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Mr-Robot-001"
} | [] | closed | false | null | [] | null | [] | "2022-08-14T14:57:00Z" | "2023-09-24T10:04:13Z" | "2022-08-14T14:57:45Z" | NONE | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4847.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4847",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/4847.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4847"
} | aa | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4847/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4847/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1665 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1665/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1665/comments | https://api.github.com/repos/huggingface/datasets/issues/1665/events | https://github.com/huggingface/datasets/pull/1665 | 776,431,087 | MDExOlB1bGxSZXF1ZXN0NTQ2OTI1NTgw | 1,665 | Add language to dataset card for Counter dataset. | {
"avatar_url": "https://avatars.githubusercontent.com/u/14899066?v=4",
"events_url": "https://api.github.com/users/arkhalid/events{/privacy}",
"followers_url": "https://api.github.com/users/arkhalid/followers",
"following_url": "https://api.github.com/users/arkhalid/following{/other_user}",
"gists_url": "https://api.github.com/users/arkhalid/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/arkhalid",
"id": 14899066,
"login": "arkhalid",
"node_id": "MDQ6VXNlcjE0ODk5MDY2",
"organizations_url": "https://api.github.com/users/arkhalid/orgs",
"received_events_url": "https://api.github.com/users/arkhalid/received_events",
"repos_url": "https://api.github.com/users/arkhalid/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/arkhalid/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arkhalid/subscriptions",
"type": "User",
"url": "https://api.github.com/users/arkhalid"
} | [] | closed | false | null | [] | null | [] | "2020-12-30T12:23:20Z" | "2020-12-30T17:20:20Z" | "2020-12-30T17:20:20Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1665.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1665",
"merged_at": "2020-12-30T17:20:20Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1665.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1665"
} | Add language. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1665/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1665/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6069 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6069/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6069/comments | https://api.github.com/repos/huggingface/datasets/issues/6069/events | https://github.com/huggingface/datasets/issues/6069 | 1,820,831,535 | I_kwDODunzps5sh68v | 6,069 | KeyError: dataset has no key "image" | {
"avatar_url": "https://avatars.githubusercontent.com/u/28512232?v=4",
"events_url": "https://api.github.com/users/etetteh/events{/privacy}",
"followers_url": "https://api.github.com/users/etetteh/followers",
"following_url": "https://api.github.com/users/etetteh/following{/other_user}",
"gists_url": "https://api.github.com/users/etetteh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/etetteh",
"id": 28512232,
"login": "etetteh",
"node_id": "MDQ6VXNlcjI4NTEyMjMy",
"organizations_url": "https://api.github.com/users/etetteh/orgs",
"received_events_url": "https://api.github.com/users/etetteh/received_events",
"repos_url": "https://api.github.com/users/etetteh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/etetteh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/etetteh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/etetteh"
} | [] | closed | false | null | [] | null | [
"You can list the dataset's columns with `ds.column_names` before `.map` to check whether the dataset has an `image` column. If it doesn't, then this is a bug. Otherwise, please paste the line with the `.map` call.\r\n\r\n\r\n",
"This is the piece of code I am running:\r\n```\r\ndata_transforms = utils.get_data_augmentation(args)\r\nimage_dataset = utils.load_image_dataset(args.dataset)\r\n\r\ndef resize(examples):\r\n examples[\"pixel_values\"] = [image.convert(\"RGB\").resize((300, 300)) for image in examples[\"image\"]]\r\n return examples\r\n\r\ndef preprocess_train(example_batch):\r\n print(f\"Example batch: \\n{example_batch}\")\r\n example_batch[\"pixel_values\"] = [\r\n data_transforms[\"train\"](image.convert(\"RGB\")) for image in example_batch[\"pixel_values\"]\r\n ]\r\n return example_batch\r\n\r\ndef preprocess_val(example_batch):\r\n example_batch[\"pixel_values\"] = [\r\n data_transforms[\"val\"](image.convert(\"RGB\")) for image in example_batch[\"pixel_values\"]\r\n ]\r\n return example_batch\r\n\r\nimage_dataset = image_dataset.map(resize, remove_columns=[\"image\"], batched=True)\r\n\r\nimage_dataset[\"train\"].set_transform(preprocess_train)\r\nimage_dataset[\"validation\"].set_transform(preprocess_val)\r\n```\r\n\r\nWhen I print ds.column_names I get the following\r\n`{'train': ['image', 'label'], 'validation': ['image', 'label'], 'test': ['image', 'label']}`\r\n\r\nThe `print(f\"Example batch: \\n{example_batch}\")` in the `preprocess_train` function outputs only labels without images:\r\n```\r\nExample batch: \r\n{'label': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3]}\r\n```\r\n\r\nThe weird part of it all is that a sample code runs in a jupyter lab notebook without any bugs, but when I run my scripts from the terminal I get the bug. The same code.",
"The `remove_columns=[\"image\"]` argument in the `.map` call removes the `image` column from the output, so drop this argument to preserve it.",
"The problem is not with the removal of the image key. The bug is why only the labels are sent to be process, instead of all the featues or dictionary keys.\r\n\r\nP.S. I just dropped the removal argument as you've suggested, but that didn't solve the problem, because only the labels are being sent to be processed",
"All the `image_dataset.column_names` after the `map` call should also be present in `preprocess_train `/`preprocess_val` unless (input) `columns` in `set_transform` are specified.\r\n\r\nIf that's not the case, we need a full reproducer (not snippets) with the environment info.",
"I have resolved the error after including a collate function as indicated in the Quick Start session of the Datasets docs.:\r\n\r\nHere is what I did:\r\n```\r\ndata_transforms = utils.get_data_augmentation(args)\r\nimage_dataset = utils.load_image_dataset(args.dataset)\r\n\r\ndef preprocess_train(example_batch):\r\n example_batch[\"pixel_values\"] = [\r\n data_transforms[\"train\"](image.convert(\"RGB\")) for image in example_batch[\"image\"]\r\n ]\r\n return example_batch\r\n\r\ndef preprocess_val(example_batch):\r\n example_batch[\"pixel_values\"] = [\r\n data_transforms[\"val\"](image.convert(\"RGB\")) for image in example_batch[\"image\"]\r\n ]\r\n return example_batch\r\n\r\ndef collate_fn(examples):\r\n images = []\r\n labels = []\r\n for example in examples:\r\n images.append((example[\"pixel_values\"]))\r\n labels.append(example[\"label\"])\r\n\r\n pixel_values = torch.stack(images)\r\n labels = torch.tensor(labels)\r\n return {\"pixel_values\": pixel_values, \"label\": labels}\r\n\r\ntrain_dataset = image_dataset[\"train\"].with_transform(preprocess_train)\r\nval_dataset = image_dataset[\"validation\"].with_transform(preprocess_val)\r\n\r\nimage_datasets = {\r\n \"train\": train_dataset,\r\n \"val\": val_dataset\r\n}\r\n\r\nsamplers = {\r\n \"train\": data.RandomSampler(train_dataset),\r\n \"val\": data.SequentialSampler(val_dataset),\r\n}\r\n\r\ndataloaders = {\r\n x: data.DataLoader(\r\n image_datasets[x],\r\n collate_fn=collate_fn,\r\n batch_size=batch_size,\r\n sampler=samplers[x],\r\n num_workers=args.num_workers,\r\n worker_init_fn=utils.set_seed_for_worker,\r\n generator=g,\r\n pin_memory=True,\r\n )\r\n for x in [\"train\", \"val\"]\r\n}\r\n\r\ntrain_loader, val_loader = dataloaders[\"train\"], dataloaders[\"val\"]\r\n```\r\nEverything runs fine without any bug now. "
] | "2023-07-25T17:45:50Z" | "2023-07-27T12:42:17Z" | "2023-07-27T12:42:17Z" | NONE | null | null | null | ### Describe the bug
I've loaded a local image dataset with:
`ds = load_dataset("imagefolder", data_dir="path/to/data")`
and defined a transform to process the data, following the Datasets docs.
However, I get a KeyError, indicating there's no "image" key in my dataset. When I print out the example_batch sent to the transformation function, it shows that only the labels are being sent.
For some reason, the images are not in the example batches.
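For context, a condensed sketch of the setup (the path and the transform body are simplified stand-ins for my actual code):

```python
from datasets import load_dataset

ds = load_dataset("imagefolder", data_dir="path/to/data")

def preprocess_train(example_batch):
    # Expects the raw PIL images under the "image" key
    example_batch["pixel_values"] = [
        img.convert("RGB").resize((224, 224)) for img in example_batch["image"]
    ]
    return example_batch

ds["train"].set_transform(preprocess_train)
ds["train"][0]  # in my case: KeyError, the transform only receives "label"
```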
### Steps to reproduce the bug
I'm using the latest stable version of datasets
### Expected behavior
I expect the example_batches to contain both images and labels
### Environment info
I'm using the latest stable version of datasets | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6069/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6069/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/1306 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1306/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1306/comments | https://api.github.com/repos/huggingface/datasets/issues/1306/events | https://github.com/huggingface/datasets/pull/1306 | 759,448,427 | MDExOlB1bGxSZXF1ZXN0NTM0NDUzMTU1 | 1,306 | add W&I + LOCNESS dataset (BEA-2019 workshop shared task on GEC) | {
"avatar_url": "https://avatars.githubusercontent.com/u/4944799?v=4",
"events_url": "https://api.github.com/users/aseifert/events{/privacy}",
"followers_url": "https://api.github.com/users/aseifert/followers",
"following_url": "https://api.github.com/users/aseifert/following{/other_user}",
"gists_url": "https://api.github.com/users/aseifert/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/aseifert",
"id": 4944799,
"login": "aseifert",
"node_id": "MDQ6VXNlcjQ5NDQ3OTk=",
"organizations_url": "https://api.github.com/users/aseifert/orgs",
"received_events_url": "https://api.github.com/users/aseifert/received_events",
"repos_url": "https://api.github.com/users/aseifert/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/aseifert/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aseifert/subscriptions",
"type": "User",
"url": "https://api.github.com/users/aseifert"
} | [] | closed | false | null | [] | null | [
"I created a clean PR where I also incorporated the suggested changes here: https://github.com/huggingface/datasets/pull/1449\r\n"
] | "2020-12-08T13:31:34Z" | "2020-12-10T09:53:54Z" | "2020-12-10T09:53:28Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1306.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1306",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1306.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1306"
} | - **Name:** W&I + LOCNESS dataset (from the BEA-2019 workshop shared task on GEC)
- **Description:** https://www.cl.cam.ac.uk/research/nl/bea2019st/#data
- **Paper:** https://www.aclweb.org/anthology/W19-4406/
- **Motivation:** This is a recent dataset (actually two in one) for grammatical error correction and is used for benchmarking in this field of NLP.
### Checkbox
- [x] Create the dataset script `/datasets/my_dataset/my_dataset.py` using the template
- [x] Fill the `_DESCRIPTION` and `_CITATION` variables
- [x] Implement `_infos()`, `_split_generators()` and `_generate_examples()`
- [x] Make sure that the `BUILDER_CONFIGS` class attribute is filled with the different configurations of the dataset and that the `BUILDER_CONFIG_CLASS` is specified if there is a custom config class.
- [x] Generate the metadata file `dataset_infos.json` for all configurations
- [x] Generate the dummy data `dummy_data.zip` files to have the dataset script tested and that they don't weigh too much (<50KB)
- [x] Add the dataset card `README.md` using the template : fill the tags and the various paragraphs
- [x] Both tests for the real data and the dummy data pass.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1306/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1306/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/57 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/57/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/57/comments | https://api.github.com/repos/huggingface/datasets/issues/57/events | https://github.com/huggingface/datasets/pull/57 | 614,261,638 | MDExOlB1bGxSZXF1ZXN0NDE0ODUzMDM5 | 57 | Better cached path | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"I should have read this PR before doing my own: https://github.com/huggingface/nlp/pull/62 :D \r\nwill close mine. Looks great :-) ",
"> Awesome, this is really nice!\r\n> \r\n> By the way, we should improve the `cached_path` method of the `transformers` repo similarly, don't you think (@patrickvonplaten in particular).\r\n\r\nYeah, we should do the same in `transformers` I think - will note it down."
] | "2020-05-07T18:36:00Z" | "2020-05-08T13:20:30Z" | "2020-05-08T13:20:28Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/57.diff",
"html_url": "https://github.com/huggingface/datasets/pull/57",
"merged_at": "2020-05-08T13:20:28Z",
"patch_url": "https://github.com/huggingface/datasets/pull/57.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/57"
} | ### Changes:
- The `cached_path` no longer returns None if the file is missing or the URL doesn't work. Instead, it can raise `FileNotFoundError` (missing file), `ConnectionError` (no cache and unreachable URL) or `ValueError` (parsing error)
- Fix requests to the Firebase API, which doesn't handle HEAD requests...
- Allow custom downloads in dataset scripts: this allows using `tf.io.gfile.copy`, for example, to download from Google Storage. I added an example: the `boolq` script | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/57/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/57/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1583 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1583/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1583/comments | https://api.github.com/repos/huggingface/datasets/issues/1583/events | https://github.com/huggingface/datasets/pull/1583 | 768,795,986 | MDExOlB1bGxSZXF1ZXN0NTQxMTIyODEz | 1,583 | Update metrics docstrings. | {
"avatar_url": "https://avatars.githubusercontent.com/u/8402500?v=4",
"events_url": "https://api.github.com/users/Fraser-Greenlee/events{/privacy}",
"followers_url": "https://api.github.com/users/Fraser-Greenlee/followers",
"following_url": "https://api.github.com/users/Fraser-Greenlee/following{/other_user}",
"gists_url": "https://api.github.com/users/Fraser-Greenlee/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Fraser-Greenlee",
"id": 8402500,
"login": "Fraser-Greenlee",
"node_id": "MDQ6VXNlcjg0MDI1MDA=",
"organizations_url": "https://api.github.com/users/Fraser-Greenlee/orgs",
"received_events_url": "https://api.github.com/users/Fraser-Greenlee/received_events",
"repos_url": "https://api.github.com/users/Fraser-Greenlee/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Fraser-Greenlee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Fraser-Greenlee/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Fraser-Greenlee"
} | [] | closed | false | null | [] | null | [] | "2020-12-16T12:14:18Z" | "2020-12-18T18:39:06Z" | "2020-12-18T18:39:06Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1583.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1583",
"merged_at": "2020-12-18T18:39:06Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1583.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1583"
} | #1478 Correcting the argument descriptions for metrics.
Let me know if there are any issues.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1583/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1583/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1777 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1777/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1777/comments | https://api.github.com/repos/huggingface/datasets/issues/1777/events | https://github.com/huggingface/datasets/issues/1777 | 793,273,770 | MDU6SXNzdWU3OTMyNzM3NzA= | 1,777 | GPT2 MNLI training using run_glue.py | {
"avatar_url": "https://avatars.githubusercontent.com/u/76427077?v=4",
"events_url": "https://api.github.com/users/nlp-student/events{/privacy}",
"followers_url": "https://api.github.com/users/nlp-student/followers",
"following_url": "https://api.github.com/users/nlp-student/following{/other_user}",
"gists_url": "https://api.github.com/users/nlp-student/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/nlp-student",
"id": 76427077,
"login": "nlp-student",
"node_id": "MDQ6VXNlcjc2NDI3MDc3",
"organizations_url": "https://api.github.com/users/nlp-student/orgs",
"received_events_url": "https://api.github.com/users/nlp-student/received_events",
"repos_url": "https://api.github.com/users/nlp-student/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/nlp-student/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nlp-student/subscriptions",
"type": "User",
"url": "https://api.github.com/users/nlp-student"
} | [] | closed | false | null | [] | null | [] | "2021-01-25T10:53:52Z" | "2021-01-25T11:12:53Z" | "2021-01-25T11:12:53Z" | NONE | null | null | null | Edit: I'm closing this because I actually meant to post this in `transformers`, not `datasets`
Running this on Google Colab,
```
!python run_glue.py \
--model_name_or_path gpt2 \
--task_name mnli \
--do_train \
--do_eval \
--max_seq_length 128 \
--per_gpu_train_batch_size 10 \
--gradient_accumulation_steps 32\
--learning_rate 2e-5 \
--num_train_epochs 3.0 \
--output_dir models/gpt2/mnli/
```
I get the following error,
```
"Asking to pad but the tokenizer does not have a padding token. "
ValueError: Asking to pad but the tokenizer does not have a padding token. Please select a token to use as `pad_token` `(tokenizer.pad_token = tokenizer.eos_token e.g.)` or add a new pad token via `tokenizer.add_special_tokens({'pad_token': '[PAD]'})`.
```
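For what it's worth, the error message's own suggestion would look like this (a sketch; where exactly `run_glue.py` builds its tokenizer is an assumption):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
# GPT-2 ships without a padding token; reuse EOS as the error suggests.
tokenizer.pad_token = tokenizer.eos_token
```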
Do I need to modify the trainer to work with GPT2? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1777/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1777/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3312 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3312/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3312/comments | https://api.github.com/repos/huggingface/datasets/issues/3312/events | https://github.com/huggingface/datasets/pull/3312 | 1,060,440,346 | PR_kwDODunzps4u3duV | 3,312 | add bl books genre dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/8995957?v=4",
"events_url": "https://api.github.com/users/davanstrien/events{/privacy}",
"followers_url": "https://api.github.com/users/davanstrien/followers",
"following_url": "https://api.github.com/users/davanstrien/following{/other_user}",
"gists_url": "https://api.github.com/users/davanstrien/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/davanstrien",
"id": 8995957,
"login": "davanstrien",
"node_id": "MDQ6VXNlcjg5OTU5NTc=",
"organizations_url": "https://api.github.com/users/davanstrien/orgs",
"received_events_url": "https://api.github.com/users/davanstrien/received_events",
"repos_url": "https://api.github.com/users/davanstrien/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/davanstrien/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/davanstrien/subscriptions",
"type": "User",
"url": "https://api.github.com/users/davanstrien"
} | [] | closed | false | null | [] | null | [
"To fix the CI, feel free to run the `make style` command to format the code.\r\n\r\nThen it also looks like the dummy_data.zip archives are all empty, which makes the tests fail. Can you try regenerating them ? They should have one file inside which is a dummy version of the file at https://bl.iro.bl.uk/downloads/36c7cd20-c8a7-4495-acbe-469b9132c6b1?locale=en",
"@lhoestq, thanks for that feedback. \r\n\r\nI should have made most of these changes now. The `--auto_generate` flag wasn't working because the file wasn't downloaded with a `.csv` extension. I used `--match_text_files \"*\"` to get around this. Because there is a lot of data that isn't annotated using the default line number for the dummy data causes the `annotated_raw` and the `title_genre_classifiction` configs to fail because they don't generate any examples — bumping the line numbers to `250` fixes this. This does make the dummy data a bit bigger, though. \r\n\r\nThe total directory size for the dataset is now `150kb`. Is this okay, or do you want me to generate the dummy data manually instead? ",
"Hi ! yes 150kB is fine :)\r\nFeel free to push your new dummy_data.zip files (I think the current one are still the empty ones)",
"@lhoestq I've pushed those dummy files now and added your other suggestions.",
"The CI failure is unrelated to this PR, merging :)",
"@lhoestq, thanks for all your help with this pull request 😀"
] | "2021-11-22T17:54:50Z" | "2021-12-02T16:10:29Z" | "2021-12-02T16:07:47Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3312.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3312",
"merged_at": "2021-12-02T16:07:47Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3312.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3312"
} | First of all thanks for the fantastic library/collection of datasets 🤗
This pull request adds a dataset of metadata from digitised (mostly 19th-century) books from the British Library. The [data](https://bl.iro.bl.uk/concern/datasets/1e1ccb46-65b4-4481-b6f8-b8129d5da053) contains various metadata about the books. In addition, a subset of the data includes 'genre' information which can be used for supervised text classification tasks. I hope that this offers easier access to a dataset for doing text classification on GLAM (galleries, libraries, archives and museums) data.
I have tried to create three configurations that provide both an 'easy' version of the dataset, if you want to use it for training a genre classification model, and a more 'raw' version for other potential use cases. I am open to suggestions if this doesn't make sense.
Similarly, for some of the arrow datatypes, I have had to fall back to strings since there are missing values for some fields/rows, but I may have missed a more elegant way of dealing with it.
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 2,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3312/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3312/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5342 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5342/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5342/comments | https://api.github.com/repos/huggingface/datasets/issues/5342/events | https://github.com/huggingface/datasets/issues/5342 | 1,485,244,178 | I_kwDODunzps5YhwcS | 5,342 | Emotion dataset cannot be downloaded | {
"avatar_url": "https://avatars.githubusercontent.com/u/78887193?v=4",
"events_url": "https://api.github.com/users/cbarond/events{/privacy}",
"followers_url": "https://api.github.com/users/cbarond/followers",
"following_url": "https://api.github.com/users/cbarond/following{/other_user}",
"gists_url": "https://api.github.com/users/cbarond/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cbarond",
"id": 78887193,
"login": "cbarond",
"node_id": "MDQ6VXNlcjc4ODg3MTkz",
"organizations_url": "https://api.github.com/users/cbarond/orgs",
"received_events_url": "https://api.github.com/users/cbarond/received_events",
"repos_url": "https://api.github.com/users/cbarond/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cbarond/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cbarond/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cbarond"
} | [
{
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists",
"id": 1935892865,
"name": "duplicate",
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate"
}
] | closed | false | null | [] | null | [
"Hi @cbarond there's already an open issue at https://github.com/dair-ai/emotion_dataset/issues/5, as the data seems to be missing now, so check that issue instead 👍🏻 ",
"Thanks @cbarond for reporting and @alvarobartt for pointing to the issue we opened in the author's repo.\r\n\r\nIndeed, this issue was first raised in the \"emotion\" dataset Community tab: https://huggingface.co/datasets/emotion/discussions/3\r\n\r\nI'm closing this issue and leave the issue above for the subsequent updates.\r\n\r\nDuplicate of: https://huggingface.co/datasets/emotion/discussions/3",
"try using \"SetFit/emotion\" instead",
"> try using \"SetFit/emotion\" instead\r\n\r\nI' replaced \"emotion\" with \"SetFit/Emotion\", but the code is getting stuck at\r\n\r\n`emotions = load_dataset(\"SetFit/emotion\")`\r\n\r\nI pause execution using the debugger, and it takes me to filelock.py:226\r\n\r\n`with self._thread_lock:`\r\n\r\nDo you know a way to get past this issue?",
"thanks @honeyimholm - worked for me",
"> try using \"SetFit/emotion\" instead\r\n\r\nIt really helps a lot, thank you!",
"The dataset loading script has been fixed: https://huggingface.co/datasets/emotion/discussions/4"
] | "2022-12-08T19:07:09Z" | "2023-02-23T19:13:19Z" | "2022-12-09T10:46:11Z" | NONE | null | null | null | ### Describe the bug
The emotion dataset gives a FileNotFoundError. The full error is: `FileNotFoundError: Couldn't find file at https://www.dropbox.com/s/1pzkadrvffbqw6o/train.txt?dl=1`.
It was working yesterday (December 7, 2022), but stopped working today (December 8, 2022).
### Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("emotion")
```
### Expected behavior
The dataset should load properly.
### Environment info
- `datasets` version: 2.7.1
- Platform: Windows-10-10.0.19045-SP0
- Python version: 3.9.13
- PyArrow version: 10.0.1
- Pandas version: 1.5.1 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5342/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5342/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/221 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/221/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/221/comments | https://api.github.com/repos/huggingface/datasets/issues/221/events | https://github.com/huggingface/datasets/pull/221 | 627,300,648 | MDExOlB1bGxSZXF1ZXN0NDI1MTI5OTc0 | 221 | Fix tests/test_dataset_common.py | {
"avatar_url": "https://avatars.githubusercontent.com/u/13635495?v=4",
"events_url": "https://api.github.com/users/tayciryahmed/events{/privacy}",
"followers_url": "https://api.github.com/users/tayciryahmed/followers",
"following_url": "https://api.github.com/users/tayciryahmed/following{/other_user}",
"gists_url": "https://api.github.com/users/tayciryahmed/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/tayciryahmed",
"id": 13635495,
"login": "tayciryahmed",
"node_id": "MDQ6VXNlcjEzNjM1NDk1",
"organizations_url": "https://api.github.com/users/tayciryahmed/orgs",
"received_events_url": "https://api.github.com/users/tayciryahmed/received_events",
"repos_url": "https://api.github.com/users/tayciryahmed/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/tayciryahmed/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tayciryahmed/subscriptions",
"type": "User",
"url": "https://api.github.com/users/tayciryahmed"
} | [] | closed | false | null | [] | null | [
"Thanks ! Good catch :)\r\n\r\nTo fix the CI you can do:\r\n1 - rebase from master\r\n2 - then run `make style` as specified in [CONTRIBUTING.md](https://github.com/huggingface/nlp/blob/master/CONTRIBUTING.md) ?"
] | "2020-05-29T14:12:15Z" | "2020-06-01T12:20:42Z" | "2020-05-29T15:02:23Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/221.diff",
"html_url": "https://github.com/huggingface/datasets/pull/221",
"merged_at": "2020-05-29T15:02:23Z",
"patch_url": "https://github.com/huggingface/datasets/pull/221.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/221"
} | When I run the command `RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_real_dataset_arcd` while working on #220, I get the error `unexpected keyword argument 'download_and_prepare_kwargs'` at the level of `load_dataset`. Indeed, this [function](https://github.com/huggingface/nlp/blob/master/src/nlp/load.py#L441) no longer has the argument `download_and_prepare_kwargs` but rather `download_config`. So here I change the tests accordingly. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/221/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/221/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2903 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2903/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2903/comments | https://api.github.com/repos/huggingface/datasets/issues/2903/events | https://github.com/huggingface/datasets/pull/2903 | 995,715,191 | PR_kwDODunzps4rtxxV | 2,903 | Fix xpathopen to accept positional arguments | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"thanks!"
] | "2021-09-14T08:02:50Z" | "2021-09-14T08:51:21Z" | "2021-09-14T08:40:47Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2903.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2903",
"merged_at": "2021-09-14T08:40:47Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2903.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2903"
} | Fix `xpathopen()` so that it also accepts positional arguments.
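For illustration, the intended behaviour is roughly the following (a simplified sketch using the built-in `open`; the real streaming implementation wraps an fsspec-aware open and may differ):
```python
# Simplified sketch: forward positional arguments (e.g. the mode "rb")
# to the underlying open call instead of accepting keyword arguments only.
def xpathopen(path, *args, **kwargs):
    return open(str(path), *args, **kwargs)
```
With this, a call such as `xpathopen(filepath, "rb")` works instead of raising.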
Fix #2901. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2903/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2903/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4956 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4956/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4956/comments | https://api.github.com/repos/huggingface/datasets/issues/4956/events | https://github.com/huggingface/datasets/pull/4956 | 1,366,475,160 | PR_kwDODunzps4-m5NU | 4,956 | Fix TF tests for 2.10 | {
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Rocketknight1",
"id": 12866554,
"login": "Rocketknight1",
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Rocketknight1"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | "2022-09-08T14:39:10Z" | "2022-09-08T15:16:51Z" | "2022-09-08T15:14:44Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4956.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4956",
"merged_at": "2022-09-08T15:14:44Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4956.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4956"
} | Fixes #4953 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4956/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4956/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2346 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2346/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2346/comments | https://api.github.com/repos/huggingface/datasets/issues/2346/events | https://github.com/huggingface/datasets/pull/2346 | 886,632,114 | MDExOlB1bGxSZXF1ZXN0NjM5OTAzMjk3 | 2,346 | Add Qasper Dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/15624271?v=4",
"events_url": "https://api.github.com/users/cceyda/events{/privacy}",
"followers_url": "https://api.github.com/users/cceyda/followers",
"following_url": "https://api.github.com/users/cceyda/following{/other_user}",
"gists_url": "https://api.github.com/users/cceyda/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cceyda",
"id": 15624271,
"login": "cceyda",
"node_id": "MDQ6VXNlcjE1NjI0Mjcx",
"organizations_url": "https://api.github.com/users/cceyda/orgs",
"received_events_url": "https://api.github.com/users/cceyda/received_events",
"repos_url": "https://api.github.com/users/cceyda/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cceyda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cceyda/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cceyda"
} | [] | closed | false | null | [] | null | [
"I saw that the README [template](https://github.com/huggingface/datasets/blob/master/templates/README.md) changed while I was working on this 😅 Some TOC titles may be different but I filled it to the best of my knowledge & readme quality check passes now.\r\nready for review @lhoestq "
] | "2021-05-11T09:25:44Z" | "2021-05-18T12:28:28Z" | "2021-05-18T12:28:28Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2346.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2346",
"merged_at": "2021-05-18T12:28:27Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2346.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2346"
} | [Question Answering on Scientific Research Papers](https://allenai.org/project/qasper/home)
Doing NLP on NLP papers to do NLP ♻️ I had to add it~
- [x] Add README (just gotta fill out some more)
- [x] Dataloader code
- [x] Make dummy dataset
- [x] Generate dataset infos
- [x] Tests
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2346/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2346/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1621 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1621/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1621/comments | https://api.github.com/repos/huggingface/datasets/issues/1621/events | https://github.com/huggingface/datasets/pull/1621 | 772,940,417 | MDExOlB1bGxSZXF1ZXN0NTQ0MTE4MTAz | 1,621 | updated dutch_social.py for loading jsonl (lines instead of list) files | {
"avatar_url": "https://avatars.githubusercontent.com/u/9033954?v=4",
"events_url": "https://api.github.com/users/skyprince999/events{/privacy}",
"followers_url": "https://api.github.com/users/skyprince999/followers",
"following_url": "https://api.github.com/users/skyprince999/following{/other_user}",
"gists_url": "https://api.github.com/users/skyprince999/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/skyprince999",
"id": 9033954,
"login": "skyprince999",
"node_id": "MDQ6VXNlcjkwMzM5NTQ=",
"organizations_url": "https://api.github.com/users/skyprince999/orgs",
"received_events_url": "https://api.github.com/users/skyprince999/received_events",
"repos_url": "https://api.github.com/users/skyprince999/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/skyprince999/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/skyprince999/subscriptions",
"type": "User",
"url": "https://api.github.com/users/skyprince999"
} | [] | closed | false | null | [] | null | [] | "2020-12-22T13:18:11Z" | "2020-12-23T11:51:51Z" | "2020-12-23T11:51:51Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1621.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1621",
"merged_at": "2020-12-23T11:51:51Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1621.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1621"
} | The data_loader is modified to load files on the fly. Earlier it read the entire file into memory and then processed the records.
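For illustration, the on-the-fly pattern looks roughly like this (a simplified sketch, not the actual dutch_social.py code; the real script also maps the record fields to features):
```python
import json

# Simplified sketch: yield one example per JSON line instead of
# json.load()-ing the whole file as a single list first.
def _generate_examples(filepath):
    with open(filepath, encoding="utf-8") as f:
        for id_, line in enumerate(f):
            yield id_, json.loads(line)
```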
Please refer to the previous PR #1321. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1621/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1621/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6338 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6338/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6338/comments | https://api.github.com/repos/huggingface/datasets/issues/6338/events | https://github.com/huggingface/datasets/pull/6338 | 1,956,886,072 | PR_kwDODunzps5dg_sb | 6,338 | pin fsspec before it switches to glob.glob | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"closing in favor of https://github.com/huggingface/datasets/pull/6337",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6338). All of your documentation changes will be reflected on that endpoint."
] | "2023-10-23T10:50:54Z" | "2023-10-23T10:57:07Z" | "2023-10-23T10:51:52Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6338.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6338",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6338.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6338"
} | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6338/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6338/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3018 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3018/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3018/comments | https://api.github.com/repos/huggingface/datasets/issues/3018/events | https://github.com/huggingface/datasets/issues/3018 | 1,015,311,877 | I_kwDODunzps48hG4F | 3,018 | Support multiple zipped CSV data files | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | [
"@lhoestq I would like to draw your attention to the proposed API by @lewtun, using `data_dir` to pass the ZIP URL.\r\n\r\nI'm not totally convinced with this... What do you think?\r\n\r\nMaybe we could discuss other approaches...\r\n\r\nOne brainstorming idea: what about using URL chaining with the hop operator in `data_files`?",
"`data_dir` is currently exclusively used for manually downloaded data.\r\n\r\nMaybe we can have an API that only uses data_files as you are suggesting, using URL chaining ?\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nurl = \"https://domain.org/filename.zip\"\r\ndata_files = {\"train\": \"zip://train_filename.csv::\" + url, \"test\": \"zip://test_filename.csv::\" + url}\r\ndataset = load_dataset(\"csv\", data_files=data_files)\r\n```\r\n\r\nURL chaining is used by `fsspec` to get access to files in nested filesystems of any kind. Since `fsspec` is being used by `pandas`, `dask` and also extensively by `datasets` I think it would be nice to use it here too",
"URL chaining sounds super nice to me! And it's also a nice way to leverage the same concepts we currently have in the docs around `fsspec` :)"
] | "2021-10-04T15:16:59Z" | "2021-10-05T14:32:57Z" | null | MEMBER | null | null | null | As requested by @lewtun, support loading multiple zipped CSV data files.
```python
from datasets import load_dataset
url = "https://domain.org/filename.zip"
data_files = {"train": "train_filename.csv", "test": "test_filename.csv"}
dataset = load_dataset("csv", data_dir=url, data_files=data_files)
```
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3018/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3018/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3123 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3123/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3123/comments | https://api.github.com/repos/huggingface/datasets/issues/3123/events | https://github.com/huggingface/datasets/issues/3123 | 1,031,793,207 | I_kwDODunzps49f-o3 | 3,123 | Segmentation fault when loading datasets from file | {
"avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4",
"events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}",
"followers_url": "https://api.github.com/users/TevenLeScao/followers",
"following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}",
"gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/TevenLeScao",
"id": 26709476,
"login": "TevenLeScao",
"node_id": "MDQ6VXNlcjI2NzA5NDc2",
"organizations_url": "https://api.github.com/users/TevenLeScao/orgs",
"received_events_url": "https://api.github.com/users/TevenLeScao/received_events",
"repos_url": "https://api.github.com/users/TevenLeScao/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions",
"type": "User",
"url": "https://api.github.com/users/TevenLeScao"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
] | null | [
"Hi ! I created an issue on Arrow's JIRA after making a minimum reproducible example\r\n\r\nhttps://issues.apache.org/jira/browse/ARROW-14439\r\n\r\n```python\r\nimport io\r\n\r\nimport pyarrow.json as paj\r\n\r\nbatch = b'{\"a\": [], \"b\": 1}\\n{\"b\": 1}'\r\nblock_size = 12\r\n\r\npaj.read_json(\r\n io.BytesIO(batch), read_options=paj.ReadOptions(block_size=block_size)\r\n)\r\n```\r\n\r\nI don't see a way to workaround this properly now without hurting the performance of the JSON loader significantly though",
"The issue has been fixed in pyarrow 6.0.0, please update pyarrow :)\r\n\r\nThe issue was due to missing fields in the JSON data of type list. Now it's working fine and missing list fields are replaced with empty lists"
] | "2021-10-20T20:16:11Z" | "2021-11-02T14:57:07Z" | "2021-11-02T14:57:07Z" | CONTRIBUTOR | null | null | null | ## Describe the bug
Custom dataset loading sometimes segfaults and kills the process if chunks contain a variety of features.
## Steps to reproduce the bug
Download an example file:
```
wget https://gist.githubusercontent.com/TevenLeScao/11e2184394b3fa47d693de2550942c6b/raw/4232704d08fbfcaf93e5b51def9e5051507651ad/tiny_kelm.jsonl
```
Then in Python:
```
import datasets
tiny_kelm = datasets.load_dataset("json", data_files="tiny_kelm.jsonl", chunksize=100000)
```
## Expected results
a functional `tiny_kelm` dataset
## Actual results
☠️ `Segmentation fault (core dumped)` ☠️
## Environment info
- `datasets` version: 1.14.0
- Platform: Linux-5.11.0-38-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 5.0.0 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3123/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3123/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5133 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5133/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5133/comments | https://api.github.com/repos/huggingface/datasets/issues/5133/events | https://github.com/huggingface/datasets/issues/5133 | 1,413,623,462 | I_kwDODunzps5UQi6m | 5,133 | Tensor operation not functioning in dataset mapping | {
"avatar_url": "https://avatars.githubusercontent.com/u/50691954?v=4",
"events_url": "https://api.github.com/users/xinghaow99/events{/privacy}",
"followers_url": "https://api.github.com/users/xinghaow99/followers",
"following_url": "https://api.github.com/users/xinghaow99/following{/other_user}",
"gists_url": "https://api.github.com/users/xinghaow99/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/xinghaow99",
"id": 50691954,
"login": "xinghaow99",
"node_id": "MDQ6VXNlcjUwNjkxOTU0",
"organizations_url": "https://api.github.com/users/xinghaow99/orgs",
"received_events_url": "https://api.github.com/users/xinghaow99/received_events",
"repos_url": "https://api.github.com/users/xinghaow99/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/xinghaow99/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xinghaow99/subscriptions",
"type": "User",
"url": "https://api.github.com/users/xinghaow99"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | [
"Hi! The Torch ops in your snippet are not equivalent to the NumPy ones, hence the difference. You can get the same behavior by replacing the line `feature = torch.mean(feature, dim=1)` with `feature = feature.squeeze().mean(1)` .",
"> Hi! The Torch ops in your snippet are not equivalent to the NumPy ones, hence the difference. You can get the same behavior by replacing the line `feature = torch.mean(feature, dim=1)` with `feature = feature.squeeze().mean(1)` .\r\n\r\nThank you. "
] | "2022-10-18T17:53:35Z" | "2022-10-19T04:15:45Z" | "2022-10-19T04:15:44Z" | NONE | null | null | null | ## Describe the bug
I'm applying a torch.mean() operation in data preprocessing, and it is not reducing the dimensions as expected.
## Steps to reproduce the bug
```
from transformers import pipeline
import torch
import numpy as np
from datasets import load_dataset
device = 'cuda:0'
raw_dataset = load_dataset("glue", "sst2")
feature_extraction = pipeline('feature-extraction', 'bert-base-uncased', device=device)
def extracted_data(examples):
# feature = torch.tensor(feature_extraction(examples['sentence'], batch_size=16), device=device)
# feature = torch.mean(feature, dim=1)
feature = np.asarray(feature_extraction(examples['sentence'], batch_size=16)).squeeze().mean(1)
print(feature.shape)
return {'feature': feature}
extracted_dataset = raw_dataset.map(extracted_data, batched=True, batch_size=16)
```
## Results
When running with torch.mean(), the printed shape is [16, seq_len, 768], exactly the same as before the operation. The NumPy version, on the other hand, works as intended and gives [16, 768].
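Per the resolution in the comments above, the Torch version that matches this NumPy behaviour squeezes the extra dimension before averaging; a sketch reusing the names from the snippet above:
```python
# Squeeze the singleton dimension carried by the pipeline output, then
# average over the token axis, mirroring np.asarray(...).squeeze().mean(1).
feature = torch.tensor(feature_extraction(examples['sentence'], batch_size=16), device=device)
feature = feature.squeeze().mean(1)  # -> shape [16, 768]
```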
## Environment info
- `datasets` version: 2.6.1
- Platform: Linux-4.4.0-142-generic-x86_64-with-glibc2.31
- Python version: 3.10.6
- PyArrow version: 9.0.0
- Pandas version: 1.5.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5133/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5133/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4356 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4356/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4356/comments | https://api.github.com/repos/huggingface/datasets/issues/4356/events | https://github.com/huggingface/datasets/pull/4356 | 1,236,846,308 | PR_kwDODunzps433OsB | 4,356 | Fix dataset builder default version | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"This PR requires one of these other PRs being merged first:\r\n- #4359 \r\n- huggingface/doc-builder#211"
] | "2022-05-16T09:05:10Z" | "2022-05-30T13:56:58Z" | "2022-05-30T13:47:54Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4356.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4356",
"merged_at": "2022-05-30T13:47:54Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4356.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4356"
} | Currently, when using a custom config (subclass of `BuilderConfig`), the default version set at the builder level is ignored: we must set the default version in the custom config class.
However, when loading a dataset with `config_kwargs` (for a configuration not present in `BUILDER_CONFIGS`), the default version set in the custom config is ignored and "0.0.0" is used instead:
```python
ds = load_dataset("wikipedia", language="co", date="20220501", beam_runner="DirectRunner")
```
generates the following config:
```python
WikipediaConfig(name='20220501.co', version=0.0.0, data_dir=None, data_files=None, description='Wikipedia dataset for co, parsed from 20220501 dump.')
```
with version "0.0.0" instead of "2.0.0".
As a counter-example, when the config is present in `BUILDER_CONFIGS`:
```python
ds = load_dataset("wikipedia", "20220301.fr", beam_runner="DirectRunner")
```
generates the following config:
```python
WikipediaConfig(name='20220301.fr', version=2.0.0, data_dir=None, data_files=None, description='Wikipedia dataset for fr, parsed from 20220301 dump.')
```
with correct version "2.0.0", as set in the custom config class.
The reason for this is that `DatasetBuilder` has a default VERSION ("0.0.0") that overwrites the default version set in the custom config class.
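Schematically, the two competing defaults look like this (a simplified illustration, not the actual Wikipedia builder code):
```python
import datasets

class WikipediaConfig(datasets.BuilderConfig):
    def __init__(self, version="2.0.0", **kwargs):
        super().__init__(version=version, **kwargs)  # config-level default

class Wikipedia(datasets.GeneratorBasedBuilder):
    # Stand-in for the default VERSION that DatasetBuilder itself defines;
    # before this PR it silently overrides the config-level default above.
    VERSION = datasets.Version("0.0.0")
    BUILDER_CONFIG_CLASS = WikipediaConfig
```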
This PR:
- Removes the default VERSION from `DatasetBuilder` (set to None, so that the class attribute exists but does not override the custom config's default version).
- Note that the `BuilderConfig` class already sets a default version = "0.0.0"; no need to pass this from the builder. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4356/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4356/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2578 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2578/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2578/comments | https://api.github.com/repos/huggingface/datasets/issues/2578/events | https://github.com/huggingface/datasets/pull/2578 | 935,187,497 | MDExOlB1bGxSZXF1ZXN0NjgyMTQ0OTY2 | 2,578 | Support Zstandard compressed files | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"> What if people want to run some tests without having zstandard ?\r\n> Usually what we do is add a decorator @require_zstandard for example\r\n\r\n@lhoestq I think I'm missing something here...\r\n\r\nTests are a *development* tool (to ensure we deliver a good quality lib), not something we offer to the end users of the lib. Users of the lib just `pip install datasets` and no tests are delivered with the lib (`tests` directory is outside the `src` code dir). \r\n\r\nOn the contrary, developers (contributors) of the lib do need to be able to run tests (TDD). And because of that, they are required to install datasets differently: `pip install -e .[dev]`, so that all required developing (and testing) dependencies are properly installed (included `zstandard`).\r\n\r\nApart from `zsatandard`, there are many other dev/test required dependencies for running tests, and we do not have a `@require_toto` for each and every of these dependencies in our tests: \r\n- `pytest` and `absl-py` (they are not dependencies in install_requires, but only in TEST_REQUIRE extras_require), \r\n- `boto3` (in test_filesystem.py), \r\n- `seqeval` (in test_metric_common.py), \r\n- `bs4` (used by eli5 and tested in test_hf_gcp.py)\r\n- ...\r\n\r\nSo IMHO, to run tests you should previously install datasets with dev or tests dependencies: either `pip install -e .[dev]` or `pip install -e .[tests]` (the latter to be used in CI testing-only part of the development cycle). And the tests should be written accordingly, assuming all tests dependencies are installed.",
"Hi !\r\nI was saying that because the other dependencies you mentioned are only required for _some_ tests. While here zstd is required for _all_ tests since it's imported in the conftest.py\r\nFeel free to keep it as it is right now, or maybe move the fixture to test_file_utils.py to allow users without zstd to run tests for their builders, dataset card etc. without issues",
"Thank you ! I think we can merge now",
"@lhoestq does this mean that the pile could have streaming support in the future? Afaik streaming doesnt support zstandard compressed type",
"> @lhoestq does this mean that the pile could have streaming support in the future? Afaik streaming doesnt support zstandard compressed type\r\n\r\njust for reference, i tried to stream one of the `.zst` files from [the pile](https://the-eye.eu/public/AI/pile/) using\r\n\r\n```python\r\ndata_files = [\"https://the-eye.eu/public/AI/pile/train/00.jsonl.zst\"]\r\nstreamed_dataset = load_dataset('json', split='train', data_files=data_files, streaming=True)\r\n```\r\n\r\nand got the following error:\r\n\r\n```\r\nUsing custom data configuration default-4e71acadc389c254\r\n---------------------------------------------------------------------------\r\nNotImplementedError Traceback (most recent call last)\r\n/tmp/ipykernel_1187680/10848115.py in <module>\r\n 1 data_files = [\"https://the-eye.eu/public/AI/pile/train/00.jsonl.zst\"]\r\n 2 \r\n----> 3 streamed_dataset = load_dataset('json', split='train', data_files=data_files, streaming=True)\r\n 4 \r\n\r\n~/miniconda3/envs/hf/lib/python3.8/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, task, streaming, **config_kwargs)\r\n 835 # this extends the open and os.path.join functions for data streaming\r\n 836 extend_module_for_streaming(builder_instance.__module__, use_auth_token=use_auth_token)\r\n--> 837 return builder_instance.as_streaming_dataset(\r\n 838 split=split,\r\n 839 use_auth_token=use_auth_token,\r\n\r\n~/miniconda3/envs/hf/lib/python3.8/site-packages/datasets/builder.py in as_streaming_dataset(self, split, base_path, use_auth_token)\r\n 922 data_dir=self.config.data_dir,\r\n 923 )\r\n--> 924 splits_generators = {sg.name: sg for sg in self._split_generators(dl_manager)}\r\n 925 # By default, return all splits\r\n 926 if split is None:\r\n\r\n~/miniconda3/envs/hf/lib/python3.8/site-packages/datasets/packaged_modules/json/json.py in _split_generators(self, dl_manager)\r\n 50 if not self.config.data_files:\r\n 51 raise ValueError(f\"At least one data file must be specified, but got data_files={self.config.data_files}\")\r\n---> 52 data_files = dl_manager.download_and_extract(self.config.data_files)\r\n 53 if isinstance(data_files, (str, list, tuple)):\r\n 54 files = data_files\r\n\r\n~/miniconda3/envs/hf/lib/python3.8/site-packages/datasets/utils/streaming_download_manager.py in download_and_extract(self, url_or_urls)\r\n 140 \r\n 141 def download_and_extract(self, url_or_urls):\r\n--> 142 return self.extract(self.download(url_or_urls))\r\n\r\n~/miniconda3/envs/hf/lib/python3.8/site-packages/datasets/utils/streaming_download_manager.py in extract(self, path_or_paths)\r\n 115 \r\n 116 def extract(self, path_or_paths):\r\n--> 117 urlpaths = map_nested(self._extract, path_or_paths, map_tuple=True)\r\n 118 return urlpaths\r\n 119 \r\n\r\n~/miniconda3/envs/hf/lib/python3.8/site-packages/datasets/utils/py_utils.py in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, types)\r\n 202 num_proc = 1\r\n 203 if num_proc <= 1 or len(iterable) <= num_proc:\r\n--> 204 mapped = [\r\n 205 _single_map_nested((function, obj, types, None, True))\r\n 206 for obj in utils.tqdm(iterable, disable=disable_tqdm)\r\n\r\n~/miniconda3/envs/hf/lib/python3.8/site-packages/datasets/utils/py_utils.py in <listcomp>(.0)\r\n 203 if num_proc <= 1 or len(iterable) <= num_proc:\r\n 204 mapped = [\r\n--> 205 _single_map_nested((function, obj, types, None, True))\r\n 206 for 
obj in utils.tqdm(iterable, disable=disable_tqdm)\r\n 207 ]\r\n\r\n~/miniconda3/envs/hf/lib/python3.8/site-packages/datasets/utils/py_utils.py in _single_map_nested(args)\r\n 141 # Singleton first to spare some computation\r\n 142 if not isinstance(data_struct, dict) and not isinstance(data_struct, types):\r\n--> 143 return function(data_struct)\r\n 144 \r\n 145 # Reduce logging to keep things readable in multiprocessing with tqdm\r\n\r\n~/miniconda3/envs/hf/lib/python3.8/site-packages/datasets/utils/streaming_download_manager.py in _extract(self, urlpath)\r\n 119 \r\n 120 def _extract(self, urlpath):\r\n--> 121 protocol = self._get_extraction_protocol(urlpath)\r\n 122 if protocol is None:\r\n 123 # no extraction\r\n\r\n~/miniconda3/envs/hf/lib/python3.8/site-packages/datasets/utils/streaming_download_manager.py in _get_extraction_protocol(self, urlpath)\r\n 137 elif path.endswith(\".zip\"):\r\n 138 return \"zip\"\r\n--> 139 raise NotImplementedError(f\"Extraction protocol for file at {urlpath} is not implemented yet\")\r\n 140 \r\n 141 def download_and_extract(self, url_or_urls):\r\n\r\nNotImplementedError: Extraction protocol for file at https://the-eye.eu/public/AI/pile/train/00.jsonl.zst is not implemented yet\r\n```\r\n\r\ni'm not sure whether @Shashi456 is referring to a fundamental limitation with \"streaming\" zstandard compression files or simply that we need to support the protocol in the streaming api of `datasets`\r\n\r\n",
"@lewtun our streaming mode patches the Python `open` function. I could have a look tomorrow if it is easily implementable for this case.",
"@lewtun, I have tested and yes, it is easily implementable. I've created a draft Pull Request with an implementation proposal: #2786.",
"thanks a lot @albertvillanova - now i can stream the pile :)"
] | "2021-07-01T20:22:34Z" | "2021-08-11T14:46:24Z" | "2021-07-05T10:50:27Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2578.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2578",
"merged_at": "2021-07-05T10:50:27Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2578.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2578"
} | Close #2572.
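For context, reading a Zstandard-compressed JSON Lines file with the `zstandard` package looks roughly like this (an illustrative sketch, not the code added by this PR):
```python
import io
import json
import zstandard as zstd

def read_jsonl_zst(path):
    """Yield one JSON record per line from a .jsonl.zst file."""
    with open(path, "rb") as fh:
        with zstd.ZstdDecompressor().stream_reader(fh) as reader:
            for line in io.TextIOWrapper(reader, encoding="utf-8"):
                yield json.loads(line)
```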
cc: @thomwolf | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2578/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2578/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/23 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/23/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/23/comments | https://api.github.com/repos/huggingface/datasets/issues/23/events | https://github.com/huggingface/datasets/pull/23 | 608,508,706 | MDExOlB1bGxSZXF1ZXN0NDEwMjczOTU2 | 23 | Add metrics | {
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariamabarham",
"id": 38249783,
"login": "mariamabarham",
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariamabarham"
} | [] | closed | false | null | [] | null | [] | "2020-04-28T18:02:05Z" | "2022-10-04T09:31:56Z" | "2020-05-11T08:19:38Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/23.diff",
"html_url": "https://github.com/huggingface/datasets/pull/23",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/23.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/23"
} | This PR is a draft for adding metrics (sacrebleu and seqeval are added).
Use case examples:
`import nlp`
**sacrebleu:**
```
refs = [['The dog bit the man.', 'It was not unexpected.', 'The man bit him first.'],
['The dog had bit the man.', 'No one was surprised.', 'The man had bitten the dog.']]
sys = ['The dog bit the man.', "It wasn't surprising.", 'The man had just bitten him.']
sacrebleu = nlp.load_metrics('sacrebleu')
print(sacrebleu.score)
```
**seqeval:**
```
y_true = [['O', 'O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
y_pred = [['O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
seqeval = nlp.load_metrics('seqeval')
print(seqeval.accuracy_score(y_true, y_pred))
print(seqeval.f1_score(y_true, y_pred))
```
_examples are taken from the corresponding web page_
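For comparison, a hypothetical sketch of the same seqeval call with the `load_metric`/`compute` API that `nlp` later shipped (the exact keys of the returned dict are not guaranteed here):
```python
import nlp

# Hypothetical usage with the released metric API; reuses the y_true and
# y_pred lists defined above.
seqeval = nlp.load_metric('seqeval')
results = seqeval.compute(predictions=y_pred, references=y_true)
print(results)  # dict with precision / recall / f1 / accuracy entries
```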
Your comments and suggestions are more than welcome.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/23/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/23/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/699 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/699/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/699/comments | https://api.github.com/repos/huggingface/datasets/issues/699/events | https://github.com/huggingface/datasets/issues/699 | 713,395,642 | MDU6SXNzdWU3MTMzOTU2NDI= | 699 | XNLI dataset is not loading | {
"avatar_url": "https://avatars.githubusercontent.com/u/14936525?v=4",
"events_url": "https://api.github.com/users/imadarsh1001/events{/privacy}",
"followers_url": "https://api.github.com/users/imadarsh1001/followers",
"following_url": "https://api.github.com/users/imadarsh1001/following{/other_user}",
"gists_url": "https://api.github.com/users/imadarsh1001/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/imadarsh1001",
"id": 14936525,
"login": "imadarsh1001",
"node_id": "MDQ6VXNlcjE0OTM2NTI1",
"organizations_url": "https://api.github.com/users/imadarsh1001/orgs",
"received_events_url": "https://api.github.com/users/imadarsh1001/received_events",
"repos_url": "https://api.github.com/users/imadarsh1001/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/imadarsh1001/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/imadarsh1001/subscriptions",
"type": "User",
"url": "https://api.github.com/users/imadarsh1001"
} | [] | closed | false | null | [] | null | [
"also i tried below code to solve checksum error \r\n`datasets-cli test ./datasets/xnli --save_infos --all_configs`\r\n\r\nand it shows \r\n\r\n```\r\n2020-10-02 07:06:16.588760: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1\r\nTraceback (most recent call last):\r\n File \"/opt/conda/lib/python3.7/site-packages/datasets/load.py\", line 268, in prepare_module\r\n local_path = cached_path(file_path, download_config=download_config)\r\n File \"/opt/conda/lib/python3.7/site-packages/datasets/utils/file_utils.py\", line 308, in cached_path\r\n use_etag=download_config.use_etag,\r\n File \"/opt/conda/lib/python3.7/site-packages/datasets/utils/file_utils.py\", line 474, in get_from_cache\r\n raise FileNotFoundError(\"Couldn't find file at {}\".format(url))\r\nFileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/1.0.2/datasets/./datasets/xnli/xnli.py\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/opt/conda/lib/python3.7/site-packages/datasets/load.py\", line 279, in prepare_module\r\n local_path = cached_path(file_path, download_config=download_config)\r\n File \"/opt/conda/lib/python3.7/site-packages/datasets/utils/file_utils.py\", line 308, in cached_path\r\n use_etag=download_config.use_etag,\r\n File \"/opt/conda/lib/python3.7/site-packages/datasets/utils/file_utils.py\", line 474, in get_from_cache\r\n raise FileNotFoundError(\"Couldn't find file at {}\".format(url))\r\nFileNotFoundError: Couldn't find file at https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/./datasets/xnli/xnli.py\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/opt/conda/bin/datasets-cli\", line 36, in <module>\r\n service.run()\r\n File \"/opt/conda/lib/python3.7/site-packages/datasets/commands/test.py\", line 76, in run\r\n module_path, hash = prepare_module(path)\r\n File \"/opt/conda/lib/python3.7/site-packages/datasets/load.py\", line 283, in prepare_module\r\n combined_path, github_file_path, file_path\r\nFileNotFoundError: Couldn't find file locally at ./datasets/xnli/xnli.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.0.2/datasets/./datasets/xnli/xnli.py or https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/./datasets/xnli/xnli.py\r\n```\r\n\r\n",
"Hi !\r\nYes the download url changed.\r\nIt's updated on the master branch. I'm doing a release today to fix that :)",
"the issue is fixed with latest release \r\n\r\n"
] | "2020-10-02T06:53:16Z" | "2020-10-03T17:45:52Z" | "2020-10-03T17:43:37Z" | NONE | null | null | null | `dataset = datasets.load_dataset(path='xnli')`
shows the error below:
```
/opt/conda/lib/python3.7/site-packages/nlp/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name)
36 if len(bad_urls) > 0:
37 error_msg = "Checksums didn't match" + for_verification_name + ":\n"
---> 38 raise NonMatchingChecksumError(error_msg + str(bad_urls))
39 logger.info("All the checksums matched successfully" + for_verification_name)
40
NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://www.nyu.edu/projects/bowman/xnli/XNLI-1.0.zip']
```
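Until a fixed release is out, one possible workaround (a minimal sketch, assuming `datasets` 1.x, where `load_dataset` still accepts an `ignore_verifications` flag) is to skip the checksum check; note this only helps if the recorded URL still serves the data:
```
from datasets import load_dataset

# Skip the checksum comparison that raises NonMatchingChecksumError.
# `ignore_verifications` is the datasets 1.x flag; later versions
# replaced it with `verification_mode`.
dataset = load_dataset("xnli", ignore_verifications=True)
```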
I think the root cause is that the URL has changed to "https://cims.nyu.edu/~sbowman/xnli/XNLI-MT-1.0.zip" | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/699/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/699/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/1696 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1696/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1696/comments | https://api.github.com/repos/huggingface/datasets/issues/1696/events | https://github.com/huggingface/datasets/issues/1696 | 781,096,918 | MDU6SXNzdWU3ODEwOTY5MTg= | 1,696 | Unable to install datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/12635475?v=4",
"events_url": "https://api.github.com/users/glee2429/events{/privacy}",
"followers_url": "https://api.github.com/users/glee2429/followers",
"following_url": "https://api.github.com/users/glee2429/following{/other_user}",
"gists_url": "https://api.github.com/users/glee2429/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/glee2429",
"id": 12635475,
"login": "glee2429",
"node_id": "MDQ6VXNlcjEyNjM1NDc1",
"organizations_url": "https://api.github.com/users/glee2429/orgs",
"received_events_url": "https://api.github.com/users/glee2429/received_events",
"repos_url": "https://api.github.com/users/glee2429/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/glee2429/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/glee2429/subscriptions",
"type": "User",
"url": "https://api.github.com/users/glee2429"
} | [] | closed | false | null | [] | null | [
"Maybe try to create a virtual env with python 3.8 or 3.7",
"Thanks, @thomwolf! I fixed the issue by downgrading python to 3.7. ",
"Damn sorry",
"Damn sorry"
] | "2021-01-07T07:24:37Z" | "2021-01-08T00:33:05Z" | "2021-01-07T22:06:05Z" | NONE | null | null | null | ** Edit **
I believe there's a bug in the package when installing it with Python 3.9. I recommend sticking with earlier Python versions for now. Thanks, @thomwolf, for the insight!
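For anyone else hitting this, a minimal sketch of the downgrade (assuming conda is available; a `venv` created from a 3.7 or 3.8 interpreter works the same way):
```
# Create and activate a Python 3.7 environment, then install datasets into it.
conda create -n datasets-py37 python=3.7 -y
conda activate datasets-py37
pip install datasets
```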
**Short description**
I followed the instructions for installing datasets (https://huggingface.co/docs/datasets/installation.html). However, when I tried to install datasets using `pip install datasets`, I got a massive error message after getting stuck at "Installing build dependencies...".
I wondered whether creating a virtual environment would fix the problem, but it didn't help. Can anyone offer advice on how to fix this issue?
Here's the error message:
`(env) Gas-MacBook-Pro:Downloads destiny$ pip install datasets
Collecting datasets
Using cached datasets-1.2.0-py3-none-any.whl (159 kB)
Collecting numpy>=1.17
Using cached numpy-1.19.5-cp39-cp39-macosx_10_9_x86_64.whl (15.6 MB)
Collecting pyarrow>=0.17.1
Using cached pyarrow-2.0.0.tar.gz (58.9 MB)
....
_configtest.c:9:5: warning: incompatible redeclaration of library function 'ceilf' [-Wincompatible-library-redeclaration]
int ceilf (void);
^
_configtest.c:9:5: note: 'ceilf' is a builtin with type 'float (float)'
_configtest.c:10:5: warning: incompatible redeclaration of library function 'rintf' [-Wincompatible-library-redeclaration]
int rintf (void);
^
_configtest.c:10:5: note: 'rintf' is a builtin with type 'float (float)'
_configtest.c:11:5: warning: incompatible redeclaration of library function 'truncf' [-Wincompatible-library-redeclaration]
int truncf (void);
^
_configtest.c:11:5: note: 'truncf' is a builtin with type 'float (float)'
_configtest.c:12:5: warning: incompatible redeclaration of library function 'sqrtf' [-Wincompatible-library-redeclaration]
int sqrtf (void);
^
_configtest.c:12:5: note: 'sqrtf' is a builtin with type 'float (float)'
_configtest.c:13:5: warning: incompatible redeclaration of library function 'log10f' [-Wincompatible-library-redeclaration]
int log10f (void);
^
_configtest.c:13:5: note: 'log10f' is a builtin with type 'float (float)'
_configtest.c:14:5: warning: incompatible redeclaration of library function 'logf' [-Wincompatible-library-redeclaration]
int logf (void);
^
_configtest.c:14:5: note: 'logf' is a builtin with type 'float (float)'
_configtest.c:15:5: warning: incompatible redeclaration of library function 'log1pf' [-Wincompatible-library-redeclaration]
int log1pf (void);
^
_configtest.c:15:5: note: 'log1pf' is a builtin with type 'float (float)'
_configtest.c:16:5: warning: incompatible redeclaration of library function 'expf' [-Wincompatible-library-redeclaration]
int expf (void);
^
_configtest.c:16:5: note: 'expf' is a builtin with type 'float (float)'
_configtest.c:17:5: warning: incompatible redeclaration of library function 'expm1f' [-Wincompatible-library-redeclaration]
int expm1f (void);
^
_configtest.c:17:5: note: 'expm1f' is a builtin with type 'float (float)'
_configtest.c:18:5: warning: incompatible redeclaration of library function 'asinf' [-Wincompatible-library-redeclaration]
int asinf (void);
^
_configtest.c:18:5: note: 'asinf' is a builtin with type 'float (float)'
_configtest.c:19:5: warning: incompatible redeclaration of library function 'acosf' [-Wincompatible-library-redeclaration]
int acosf (void);
^
_configtest.c:19:5: note: 'acosf' is a builtin with type 'float (float)'
_configtest.c:20:5: warning: incompatible redeclaration of library function 'atanf' [-Wincompatible-library-redeclaration]
int atanf (void);
^
_configtest.c:20:5: note: 'atanf' is a builtin with type 'float (float)'
_configtest.c:21:5: warning: incompatible redeclaration of library function 'asinhf' [-Wincompatible-library-redeclaration]
int asinhf (void);
^
_configtest.c:21:5: note: 'asinhf' is a builtin with type 'float (float)'
_configtest.c:22:5: warning: incompatible redeclaration of library function 'acoshf' [-Wincompatible-library-redeclaration]
int acoshf (void);
^
_configtest.c:22:5: note: 'acoshf' is a builtin with type 'float (float)'
_configtest.c:23:5: warning: incompatible redeclaration of library function 'atanhf' [-Wincompatible-library-redeclaration]
int atanhf (void);
^
_configtest.c:23:5: note: 'atanhf' is a builtin with type 'float (float)'
_configtest.c:24:5: warning: incompatible redeclaration of library function 'hypotf' [-Wincompatible-library-redeclaration]
int hypotf (void);
^
_configtest.c:24:5: note: 'hypotf' is a builtin with type 'float (float, float)'
_configtest.c:25:5: warning: incompatible redeclaration of library function 'atan2f' [-Wincompatible-library-redeclaration]
int atan2f (void);
^
_configtest.c:25:5: note: 'atan2f' is a builtin with type 'float (float, float)'
_configtest.c:26:5: warning: incompatible redeclaration of library function 'powf' [-Wincompatible-library-redeclaration]
int powf (void);
^
_configtest.c:26:5: note: 'powf' is a builtin with type 'float (float, float)'
_configtest.c:27:5: warning: incompatible redeclaration of library function 'fmodf' [-Wincompatible-library-redeclaration]
int fmodf (void);
^
_configtest.c:27:5: note: 'fmodf' is a builtin with type 'float (float, float)'
_configtest.c:28:5: warning: incompatible redeclaration of library function 'modff' [-Wincompatible-library-redeclaration]
int modff (void);
^
_configtest.c:28:5: note: 'modff' is a builtin with type 'float (float, float *)'
_configtest.c:29:5: warning: incompatible redeclaration of library function 'frexpf' [-Wincompatible-library-redeclaration]
int frexpf (void);
^
_configtest.c:29:5: note: 'frexpf' is a builtin with type 'float (float, int *)'
_configtest.c:30:5: warning: incompatible redeclaration of library function 'ldexpf' [-Wincompatible-library-redeclaration]
int ldexpf (void);
^
_configtest.c:30:5: note: 'ldexpf' is a builtin with type 'float (float, int)'
_configtest.c:31:5: warning: incompatible redeclaration of library function 'exp2f' [-Wincompatible-library-redeclaration]
int exp2f (void);
^
_configtest.c:31:5: note: 'exp2f' is a builtin with type 'float (float)'
_configtest.c:32:5: warning: incompatible redeclaration of library function 'log2f' [-Wincompatible-library-redeclaration]
int log2f (void);
^
_configtest.c:32:5: note: 'log2f' is a builtin with type 'float (float)'
_configtest.c:33:5: warning: incompatible redeclaration of library function 'copysignf' [-Wincompatible-library-redeclaration]
int copysignf (void);
^
_configtest.c:33:5: note: 'copysignf' is a builtin with type 'float (float, float)'
_configtest.c:34:5: warning: incompatible redeclaration of library function 'nextafterf' [-Wincompatible-library-redeclaration]
int nextafterf (void);
^
_configtest.c:34:5: note: 'nextafterf' is a builtin with type 'float (float, float)'
_configtest.c:35:5: warning: incompatible redeclaration of library function 'cbrtf' [-Wincompatible-library-redeclaration]
int cbrtf (void);
^
_configtest.c:35:5: note: 'cbrtf' is a builtin with type 'float (float)'
35 warnings generated.
clang _configtest.o -o _configtest
success!
removing: _configtest.c _configtest.o _configtest.o.d _configtest
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c'
clang: _configtest.c
_configtest.c:1:5: warning: incompatible redeclaration of library function 'sinl' [-Wincompatible-library-redeclaration]
int sinl (void);
^
_configtest.c:1:5: note: 'sinl' is a builtin with type 'long double (long double)'
_configtest.c:2:5: warning: incompatible redeclaration of library function 'cosl' [-Wincompatible-library-redeclaration]
int cosl (void);
^
_configtest.c:2:5: note: 'cosl' is a builtin with type 'long double (long double)'
_configtest.c:3:5: warning: incompatible redeclaration of library function 'tanl' [-Wincompatible-library-redeclaration]
int tanl (void);
^
_configtest.c:3:5: note: 'tanl' is a builtin with type 'long double (long double)'
_configtest.c:4:5: warning: incompatible redeclaration of library function 'sinhl' [-Wincompatible-library-redeclaration]
int sinhl (void);
^
_configtest.c:4:5: note: 'sinhl' is a builtin with type 'long double (long double)'
_configtest.c:5:5: warning: incompatible redeclaration of library function 'coshl' [-Wincompatible-library-redeclaration]
int coshl (void);
^
_configtest.c:5:5: note: 'coshl' is a builtin with type 'long double (long double)'
_configtest.c:6:5: warning: incompatible redeclaration of library function 'tanhl' [-Wincompatible-library-redeclaration]
int tanhl (void);
^
_configtest.c:6:5: note: 'tanhl' is a builtin with type 'long double (long double)'
_configtest.c:7:5: warning: incompatible redeclaration of library function 'fabsl' [-Wincompatible-library-redeclaration]
int fabsl (void);
^
_configtest.c:7:5: note: 'fabsl' is a builtin with type 'long double (long double)'
_configtest.c:8:5: warning: incompatible redeclaration of library function 'floorl' [-Wincompatible-library-redeclaration]
int floorl (void);
^
_configtest.c:8:5: note: 'floorl' is a builtin with type 'long double (long double)'
_configtest.c:9:5: warning: incompatible redeclaration of library function 'ceill' [-Wincompatible-library-redeclaration]
int ceill (void);
^
_configtest.c:9:5: note: 'ceill' is a builtin with type 'long double (long double)'
_configtest.c:10:5: warning: incompatible redeclaration of library function 'rintl' [-Wincompatible-library-redeclaration]
int rintl (void);
^
_configtest.c:10:5: note: 'rintl' is a builtin with type 'long double (long double)'
_configtest.c:11:5: warning: incompatible redeclaration of library function 'truncl' [-Wincompatible-library-redeclaration]
int truncl (void);
^
_configtest.c:11:5: note: 'truncl' is a builtin with type 'long double (long double)'
_configtest.c:12:5: warning: incompatible redeclaration of library function 'sqrtl' [-Wincompatible-library-redeclaration]
int sqrtl (void);
^
_configtest.c:12:5: note: 'sqrtl' is a builtin with type 'long double (long double)'
_configtest.c:13:5: warning: incompatible redeclaration of library function 'log10l' [-Wincompatible-library-redeclaration]
int log10l (void);
^
_configtest.c:13:5: note: 'log10l' is a builtin with type 'long double (long double)'
_configtest.c:14:5: warning: incompatible redeclaration of library function 'logl' [-Wincompatible-library-redeclaration]
int logl (void);
^
_configtest.c:14:5: note: 'logl' is a builtin with type 'long double (long double)'
_configtest.c:15:5: warning: incompatible redeclaration of library function 'log1pl' [-Wincompatible-library-redeclaration]
int log1pl (void);
^
_configtest.c:15:5: note: 'log1pl' is a builtin with type 'long double (long double)'
_configtest.c:16:5: warning: incompatible redeclaration of library function 'expl' [-Wincompatible-library-redeclaration]
int expl (void);
^
_configtest.c:16:5: note: 'expl' is a builtin with type 'long double (long double)'
_configtest.c:17:5: warning: incompatible redeclaration of library function 'expm1l' [-Wincompatible-library-redeclaration]
int expm1l (void);
^
_configtest.c:17:5: note: 'expm1l' is a builtin with type 'long double (long double)'
_configtest.c:18:5: warning: incompatible redeclaration of library function 'asinl' [-Wincompatible-library-redeclaration]
int asinl (void);
^
_configtest.c:18:5: note: 'asinl' is a builtin with type 'long double (long double)'
_configtest.c:19:5: warning: incompatible redeclaration of library function 'acosl' [-Wincompatible-library-redeclaration]
int acosl (void);
^
_configtest.c:19:5: note: 'acosl' is a builtin with type 'long double (long double)'
_configtest.c:20:5: warning: incompatible redeclaration of library function 'atanl' [-Wincompatible-library-redeclaration]
int atanl (void);
^
_configtest.c:20:5: note: 'atanl' is a builtin with type 'long double (long double)'
_configtest.c:21:5: warning: incompatible redeclaration of library function 'asinhl' [-Wincompatible-library-redeclaration]
int asinhl (void);
^
_configtest.c:21:5: note: 'asinhl' is a builtin with type 'long double (long double)'
_configtest.c:22:5: warning: incompatible redeclaration of library function 'acoshl' [-Wincompatible-library-redeclaration]
int acoshl (void);
^
_configtest.c:22:5: note: 'acoshl' is a builtin with type 'long double (long double)'
_configtest.c:23:5: warning: incompatible redeclaration of library function 'atanhl' [-Wincompatible-library-redeclaration]
int atanhl (void);
^
_configtest.c:23:5: note: 'atanhl' is a builtin with type 'long double (long double)'
_configtest.c:24:5: warning: incompatible redeclaration of library function 'hypotl' [-Wincompatible-library-redeclaration]
int hypotl (void);
^
_configtest.c:24:5: note: 'hypotl' is a builtin with type 'long double (long double, long double)'
_configtest.c:25:5: warning: incompatible redeclaration of library function 'atan2l' [-Wincompatible-library-redeclaration]
int atan2l (void);
^
_configtest.c:25:5: note: 'atan2l' is a builtin with type 'long double (long double, long double)'
_configtest.c:26:5: warning: incompatible redeclaration of library function 'powl' [-Wincompatible-library-redeclaration]
int powl (void);
^
_configtest.c:26:5: note: 'powl' is a builtin with type 'long double (long double, long double)'
_configtest.c:27:5: warning: incompatible redeclaration of library function 'fmodl' [-Wincompatible-library-redeclaration]
int fmodl (void);
^
_configtest.c:27:5: note: 'fmodl' is a builtin with type 'long double (long double, long double)'
_configtest.c:28:5: warning: incompatible redeclaration of library function 'modfl' [-Wincompatible-library-redeclaration]
int modfl (void);
^
_configtest.c:28:5: note: 'modfl' is a builtin with type 'long double (long double, long double *)'
_configtest.c:29:5: warning: incompatible redeclaration of library function 'frexpl' [-Wincompatible-library-redeclaration]
int frexpl (void);
^
_configtest.c:29:5: note: 'frexpl' is a builtin with type 'long double (long double, int *)'
_configtest.c:30:5: warning: incompatible redeclaration of library function 'ldexpl' [-Wincompatible-library-redeclaration]
int ldexpl (void);
^
_configtest.c:30:5: note: 'ldexpl' is a builtin with type 'long double (long double, int)'
_configtest.c:31:5: warning: incompatible redeclaration of library function 'exp2l' [-Wincompatible-library-redeclaration]
int exp2l (void);
^
_configtest.c:31:5: note: 'exp2l' is a builtin with type 'long double (long double)'
_configtest.c:32:5: warning: incompatible redeclaration of library function 'log2l' [-Wincompatible-library-redeclaration]
int log2l (void);
^
_configtest.c:32:5: note: 'log2l' is a builtin with type 'long double (long double)'
_configtest.c:33:5: warning: incompatible redeclaration of library function 'copysignl' [-Wincompatible-library-redeclaration]
int copysignl (void);
^
_configtest.c:33:5: note: 'copysignl' is a builtin with type 'long double (long double, long double)'
_configtest.c:34:5: warning: incompatible redeclaration of library function 'nextafterl' [-Wincompatible-library-redeclaration]
int nextafterl (void);
^
_configtest.c:34:5: note: 'nextafterl' is a builtin with type 'long double (long double, long double)'
_configtest.c:35:5: warning: incompatible redeclaration of library function 'cbrtl' [-Wincompatible-library-redeclaration]
int cbrtl (void);
^
_configtest.c:35:5: note: 'cbrtl' is a builtin with type 'long double (long double)'
35 warnings generated.
clang _configtest.o -o _configtest
success!
removing: _configtest.c _configtest.o _configtest.o.d _configtest
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c'
clang: _configtest.c
success!
removing: _configtest.c _configtest.o _configtest.o.d
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c'
clang: _configtest.c
success!
removing: _configtest.c _configtest.o _configtest.o.d
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c'
clang: _configtest.c
success!
removing: _configtest.c _configtest.o _configtest.o.d
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c'
clang: _configtest.c
success!
removing: _configtest.c _configtest.o _configtest.o.d
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c'
clang: _configtest.c
_configtest.c:8:12: error: use of undeclared identifier 'HAVE_DECL_SIGNBIT'
(void) HAVE_DECL_SIGNBIT;
^
1 error generated.
failure.
removing: _configtest.c _configtest.o
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c'
clang: _configtest.c
success!
removing: _configtest.c _configtest.o _configtest.o.d
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c'
clang: _configtest.c
success!
removing: _configtest.c _configtest.o _configtest.o.d
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c'
clang: _configtest.c
success!
removing: _configtest.c _configtest.o _configtest.o.d
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c'
clang: _configtest.c
success!
removing: _configtest.c _configtest.o _configtest.o.d
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c'
clang: _configtest.c
removing: _configtest.c _configtest.o _configtest.o.d
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c'
clang: _configtest.c
removing: _configtest.c _configtest.o _configtest.o.d
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c'
clang: _configtest.c
removing: _configtest.c _configtest.o _configtest.o.d
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c'
clang: _configtest.c
_configtest.c:1:5: warning: incompatible redeclaration of library function 'cabs' [-Wincompatible-library-redeclaration]
int cabs (void);
^
_configtest.c:1:5: note: 'cabs' is a builtin with type 'double (_Complex double)'
_configtest.c:2:5: warning: incompatible redeclaration of library function 'cacos' [-Wincompatible-library-redeclaration]
int cacos (void);
^
_configtest.c:2:5: note: 'cacos' is a builtin with type '_Complex double (_Complex double)'
_configtest.c:3:5: warning: incompatible redeclaration of library function 'cacosh' [-Wincompatible-library-redeclaration]
int cacosh (void);
^
_configtest.c:3:5: note: 'cacosh' is a builtin with type '_Complex double (_Complex double)'
_configtest.c:4:5: warning: incompatible redeclaration of library function 'carg' [-Wincompatible-library-redeclaration]
int carg (void);
^
_configtest.c:4:5: note: 'carg' is a builtin with type 'double (_Complex double)'
_configtest.c:5:5: warning: incompatible redeclaration of library function 'casin' [-Wincompatible-library-redeclaration]
int casin (void);
^
_configtest.c:5:5: note: 'casin' is a builtin with type '_Complex double (_Complex double)'
_configtest.c:6:5: warning: incompatible redeclaration of library function 'casinh' [-Wincompatible-library-redeclaration]
int casinh (void);
^
_configtest.c:6:5: note: 'casinh' is a builtin with type '_Complex double (_Complex double)'
_configtest.c:7:5: warning: incompatible redeclaration of library function 'catan' [-Wincompatible-library-redeclaration]
int catan (void);
^
_configtest.c:7:5: note: 'catan' is a builtin with type '_Complex double (_Complex double)'
_configtest.c:8:5: warning: incompatible redeclaration of library function 'catanh' [-Wincompatible-library-redeclaration]
int catanh (void);
^
_configtest.c:8:5: note: 'catanh' is a builtin with type '_Complex double (_Complex double)'
_configtest.c:9:5: warning: incompatible redeclaration of library function 'ccos' [-Wincompatible-library-redeclaration]
int ccos (void);
^
_configtest.c:9:5: note: 'ccos' is a builtin with type '_Complex double (_Complex double)'
_configtest.c:10:5: warning: incompatible redeclaration of library function 'ccosh' [-Wincompatible-library-redeclaration]
int ccosh (void);
^
_configtest.c:10:5: note: 'ccosh' is a builtin with type '_Complex double (_Complex double)'
_configtest.c:11:5: warning: incompatible redeclaration of library function 'cexp' [-Wincompatible-library-redeclaration]
int cexp (void);
^
_configtest.c:11:5: note: 'cexp' is a builtin with type '_Complex double (_Complex double)'
_configtest.c:12:5: warning: incompatible redeclaration of library function 'cimag' [-Wincompatible-library-redeclaration]
int cimag (void);
^
_configtest.c:12:5: note: 'cimag' is a builtin with type 'double (_Complex double)'
_configtest.c:13:5: warning: incompatible redeclaration of library function 'clog' [-Wincompatible-library-redeclaration]
int clog (void);
^
_configtest.c:13:5: note: 'clog' is a builtin with type '_Complex double (_Complex double)'
_configtest.c:14:5: warning: incompatible redeclaration of library function 'conj' [-Wincompatible-library-redeclaration]
int conj (void);
^
_configtest.c:14:5: note: 'conj' is a builtin with type '_Complex double (_Complex double)'
_configtest.c:15:5: warning: incompatible redeclaration of library function 'cpow' [-Wincompatible-library-redeclaration]
int cpow (void);
^
_configtest.c:15:5: note: 'cpow' is a builtin with type '_Complex double (_Complex double, _Complex double)'
_configtest.c:16:5: warning: incompatible redeclaration of library function 'cproj' [-Wincompatible-library-redeclaration]
int cproj (void);
^
_configtest.c:16:5: note: 'cproj' is a builtin with type '_Complex double (_Complex double)'
_configtest.c:17:5: warning: incompatible redeclaration of library function 'creal' [-Wincompatible-library-redeclaration]
int creal (void);
^
_configtest.c:17:5: note: 'creal' is a builtin with type 'double (_Complex double)'
_configtest.c:18:5: warning: incompatible redeclaration of library function 'csin' [-Wincompatible-library-redeclaration]
int csin (void);
^
_configtest.c:18:5: note: 'csin' is a builtin with type '_Complex double (_Complex double)'
_configtest.c:19:5: warning: incompatible redeclaration of library function 'csinh' [-Wincompatible-library-redeclaration]
int csinh (void);
^
_configtest.c:19:5: note: 'csinh' is a builtin with type '_Complex double (_Complex double)'
_configtest.c:20:5: warning: incompatible redeclaration of library function 'csqrt' [-Wincompatible-library-redeclaration]
int csqrt (void);
^
_configtest.c:20:5: note: 'csqrt' is a builtin with type '_Complex double (_Complex double)'
_configtest.c:21:5: warning: incompatible redeclaration of library function 'ctan' [-Wincompatible-library-redeclaration]
int ctan (void);
^
_configtest.c:21:5: note: 'ctan' is a builtin with type '_Complex double (_Complex double)'
_configtest.c:22:5: warning: incompatible redeclaration of library function 'ctanh' [-Wincompatible-library-redeclaration]
int ctanh (void);
^
_configtest.c:22:5: note: 'ctanh' is a builtin with type '_Complex double (_Complex double)'
22 warnings generated.
clang _configtest.o -o _configtest
success!
removing: _configtest.c _configtest.o _configtest.o.d _configtest
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c'
clang: _configtest.c
_configtest.c:1:5: warning: incompatible redeclaration of library function 'cabsf' [-Wincompatible-library-redeclaration]
int cabsf (void);
^
_configtest.c:1:5: note: 'cabsf' is a builtin with type 'float (_Complex float)'
_configtest.c:2:5: warning: incompatible redeclaration of library function 'cacosf' [-Wincompatible-library-redeclaration]
int cacosf (void);
^
_configtest.c:2:5: note: 'cacosf' is a builtin with type '_Complex float (_Complex float)'
_configtest.c:3:5: warning: incompatible redeclaration of library function 'cacoshf' [-Wincompatible-library-redeclaration]
int cacoshf (void);
^
_configtest.c:3:5: note: 'cacoshf' is a builtin with type '_Complex float (_Complex float)'
_configtest.c:4:5: warning: incompatible redeclaration of library function 'cargf' [-Wincompatible-library-redeclaration]
int cargf (void);
^
_configtest.c:4:5: note: 'cargf' is a builtin with type 'float (_Complex float)'
_configtest.c:5:5: warning: incompatible redeclaration of library function 'casinf' [-Wincompatible-library-redeclaration]
int casinf (void);
^
_configtest.c:5:5: note: 'casinf' is a builtin with type '_Complex float (_Complex float)'
_configtest.c:6:5: warning: incompatible redeclaration of library function 'casinhf' [-Wincompatible-library-redeclaration]
int casinhf (void);
^
_configtest.c:6:5: note: 'casinhf' is a builtin with type '_Complex float (_Complex float)'
_configtest.c:7:5: warning: incompatible redeclaration of library function 'catanf' [-Wincompatible-library-redeclaration]
int catanf (void);
^
_configtest.c:7:5: note: 'catanf' is a builtin with type '_Complex float (_Complex float)'
_configtest.c:8:5: warning: incompatible redeclaration of library function 'catanhf' [-Wincompatible-library-redeclaration]
int catanhf (void);
^
_configtest.c:8:5: note: 'catanhf' is a builtin with type '_Complex float (_Complex float)'
_configtest.c:9:5: warning: incompatible redeclaration of library function 'ccosf' [-Wincompatible-library-redeclaration]
int ccosf (void);
^
_configtest.c:9:5: note: 'ccosf' is a builtin with type '_Complex float (_Complex float)'
_configtest.c:10:5: warning: incompatible redeclaration of library function 'ccoshf' [-Wincompatible-library-redeclaration]
int ccoshf (void);
^
_configtest.c:10:5: note: 'ccoshf' is a builtin with type '_Complex float (_Complex float)'
_configtest.c:11:5: warning: incompatible redeclaration of library function 'cexpf' [-Wincompatible-library-redeclaration]
int cexpf (void);
^
_configtest.c:11:5: note: 'cexpf' is a builtin with type '_Complex float (_Complex float)'
_configtest.c:12:5: warning: incompatible redeclaration of library function 'cimagf' [-Wincompatible-library-redeclaration]
int cimagf (void);
^
_configtest.c:12:5: note: 'cimagf' is a builtin with type 'float (_Complex float)'
_configtest.c:13:5: warning: incompatible redeclaration of library function 'clogf' [-Wincompatible-library-redeclaration]
int clogf (void);
^
_configtest.c:13:5: note: 'clogf' is a builtin with type '_Complex float (_Complex float)'
_configtest.c:14:5: warning: incompatible redeclaration of library function 'conjf' [-Wincompatible-library-redeclaration]
int conjf (void);
^
_configtest.c:14:5: note: 'conjf' is a builtin with type '_Complex float (_Complex float)'
_configtest.c:15:5: warning: incompatible redeclaration of library function 'cpowf' [-Wincompatible-library-redeclaration]
int cpowf (void);
^
_configtest.c:15:5: note: 'cpowf' is a builtin with type '_Complex float (_Complex float, _Complex float)'
_configtest.c:16:5: warning: incompatible redeclaration of library function 'cprojf' [-Wincompatible-library-redeclaration]
int cprojf (void);
^
_configtest.c:16:5: note: 'cprojf' is a builtin with type '_Complex float (_Complex float)'
_configtest.c:17:5: warning: incompatible redeclaration of library function 'crealf' [-Wincompatible-library-redeclaration]
int crealf (void);
^
_configtest.c:17:5: note: 'crealf' is a builtin with type 'float (_Complex float)'
_configtest.c:18:5: warning: incompatible redeclaration of library function 'csinf' [-Wincompatible-library-redeclaration]
int csinf (void);
^
_configtest.c:18:5: note: 'csinf' is a builtin with type '_Complex float (_Complex float)'
_configtest.c:19:5: warning: incompatible redeclaration of library function 'csinhf' [-Wincompatible-library-redeclaration]
int csinhf (void);
^
_configtest.c:19:5: note: 'csinhf' is a builtin with type '_Complex float (_Complex float)'
_configtest.c:20:5: warning: incompatible redeclaration of library function 'csqrtf' [-Wincompatible-library-redeclaration]
int csqrtf (void);
^
_configtest.c:20:5: note: 'csqrtf' is a builtin with type '_Complex float (_Complex float)'
_configtest.c:21:5: warning: incompatible redeclaration of library function 'ctanf' [-Wincompatible-library-redeclaration]
int ctanf (void);
^
_configtest.c:21:5: note: 'ctanf' is a builtin with type '_Complex float (_Complex float)'
_configtest.c:22:5: warning: incompatible redeclaration of library function 'ctanhf' [-Wincompatible-library-redeclaration]
int ctanhf (void);
^
_configtest.c:22:5: note: 'ctanhf' is a builtin with type '_Complex float (_Complex float)'
22 warnings generated.
clang _configtest.o -o _configtest
success!
removing: _configtest.c _configtest.o _configtest.o.d _configtest
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c'
clang: _configtest.c
_configtest.c:1:5: warning: incompatible redeclaration of library function 'cabsl' [-Wincompatible-library-redeclaration]
int cabsl (void);
^
_configtest.c:1:5: note: 'cabsl' is a builtin with type 'long double (_Complex long double)'
_configtest.c:2:5: warning: incompatible redeclaration of library function 'cacosl' [-Wincompatible-library-redeclaration]
int cacosl (void);
^
_configtest.c:2:5: note: 'cacosl' is a builtin with type '_Complex long double (_Complex long double)'
_configtest.c:3:5: warning: incompatible redeclaration of library function 'cacoshl' [-Wincompatible-library-redeclaration]
int cacoshl (void);
^
_configtest.c:3:5: note: 'cacoshl' is a builtin with type '_Complex long double (_Complex long double)'
_configtest.c:4:5: warning: incompatible redeclaration of library function 'cargl' [-Wincompatible-library-redeclaration]
int cargl (void);
^
_configtest.c:4:5: note: 'cargl' is a builtin with type 'long double (_Complex long double)'
_configtest.c:5:5: warning: incompatible redeclaration of library function 'casinl' [-Wincompatible-library-redeclaration]
int casinl (void);
^
_configtest.c:5:5: note: 'casinl' is a builtin with type '_Complex long double (_Complex long double)'
_configtest.c:6:5: warning: incompatible redeclaration of library function 'casinhl' [-Wincompatible-library-redeclaration]
int casinhl (void);
^
_configtest.c:6:5: note: 'casinhl' is a builtin with type '_Complex long double (_Complex long double)'
_configtest.c:7:5: warning: incompatible redeclaration of library function 'catanl' [-Wincompatible-library-redeclaration]
int catanl (void);
^
_configtest.c:7:5: note: 'catanl' is a builtin with type '_Complex long double (_Complex long double)'
_configtest.c:8:5: warning: incompatible redeclaration of library function 'catanhl' [-Wincompatible-library-redeclaration]
int catanhl (void);
^
_configtest.c:8:5: note: 'catanhl' is a builtin with type '_Complex long double (_Complex long double)'
_configtest.c:9:5: warning: incompatible redeclaration of library function 'ccosl' [-Wincompatible-library-redeclaration]
int ccosl (void);
^
_configtest.c:9:5: note: 'ccosl' is a builtin with type '_Complex long double (_Complex long double)'
_configtest.c:10:5: warning: incompatible redeclaration of library function 'ccoshl' [-Wincompatible-library-redeclaration]
int ccoshl (void);
^
_configtest.c:10:5: note: 'ccoshl' is a builtin with type '_Complex long double (_Complex long double)'
_configtest.c:11:5: warning: incompatible redeclaration of library function 'cexpl' [-Wincompatible-library-redeclaration]
int cexpl (void);
^
_configtest.c:11:5: note: 'cexpl' is a builtin with type '_Complex long double (_Complex long double)'
_configtest.c:12:5: warning: incompatible redeclaration of library function 'cimagl' [-Wincompatible-library-redeclaration]
int cimagl (void);
^
_configtest.c:12:5: note: 'cimagl' is a builtin with type 'long double (_Complex long double)'
_configtest.c:13:5: warning: incompatible redeclaration of library function 'clogl' [-Wincompatible-library-redeclaration]
int clogl (void);
^
_configtest.c:13:5: note: 'clogl' is a builtin with type '_Complex long double (_Complex long double)'
_configtest.c:14:5: warning: incompatible redeclaration of library function 'conjl' [-Wincompatible-library-redeclaration]
int conjl (void);
^
_configtest.c:14:5: note: 'conjl' is a builtin with type '_Complex long double (_Complex long double)'
_configtest.c:15:5: warning: incompatible redeclaration of library function 'cpowl' [-Wincompatible-library-redeclaration]
int cpowl (void);
^
_configtest.c:15:5: note: 'cpowl' is a builtin with type '_Complex long double (_Complex long double, _Complex long double)'
_configtest.c:16:5: warning: incompatible redeclaration of library function 'cprojl' [-Wincompatible-library-redeclaration]
int cprojl (void);
^
_configtest.c:16:5: note: 'cprojl' is a builtin with type '_Complex long double (_Complex long double)'
_configtest.c:17:5: warning: incompatible redeclaration of library function 'creall' [-Wincompatible-library-redeclaration]
int creall (void);
^
_configtest.c:17:5: note: 'creall' is a builtin with type 'long double (_Complex long double)'
_configtest.c:18:5: warning: incompatible redeclaration of library function 'csinl' [-Wincompatible-library-redeclaration]
int csinl (void);
^
_configtest.c:18:5: note: 'csinl' is a builtin with type '_Complex long double (_Complex long double)'
_configtest.c:19:5: warning: incompatible redeclaration of library function 'csinhl' [-Wincompatible-library-redeclaration]
int csinhl (void);
^
_configtest.c:19:5: note: 'csinhl' is a builtin with type '_Complex long double (_Complex long double)'
_configtest.c:20:5: warning: incompatible redeclaration of library function 'csqrtl' [-Wincompatible-library-redeclaration]
int csqrtl (void);
^
_configtest.c:20:5: note: 'csqrtl' is a builtin with type '_Complex long double (_Complex long double)'
_configtest.c:21:5: warning: incompatible redeclaration of library function 'ctanl' [-Wincompatible-library-redeclaration]
int ctanl (void);
^
_configtest.c:21:5: note: 'ctanl' is a builtin with type '_Complex long double (_Complex long double)'
_configtest.c:22:5: warning: incompatible redeclaration of library function 'ctanhl' [-Wincompatible-library-redeclaration]
int ctanhl (void);
^
_configtest.c:22:5: note: 'ctanhl' is a builtin with type '_Complex long double (_Complex long double)'
22 warnings generated.
clang _configtest.o -o _configtest
success!
removing: _configtest.c _configtest.o _configtest.o.d _configtest
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c'
clang: _configtest.c
_configtest.c:2:12: warning: unused function 'static_func' [-Wunused-function]
static int static_func (char * restrict a)
^
1 warning generated.
success!
removing: _configtest.c _configtest.o _configtest.o.d
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c'
clang: _configtest.c
_configtest.c:3:19: warning: unused function 'static_func' [-Wunused-function]
static inline int static_func (void)
^
1 warning generated.
success!
removing: _configtest.c _configtest.o _configtest.o.d
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c'
clang: _configtest.c
removing: _configtest.c _configtest.o _configtest.o.d
File: build/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy/config.h
#define SIZEOF_PY_INTPTR_T 8
#define SIZEOF_OFF_T 8
#define SIZEOF_PY_LONG_LONG 8
#define MATHLIB
#define HAVE_SIN 1
#define HAVE_COS 1
#define HAVE_TAN 1
#define HAVE_SINH 1
#define HAVE_COSH 1
#define HAVE_TANH 1
#define HAVE_FABS 1
#define HAVE_FLOOR 1
#define HAVE_CEIL 1
#define HAVE_SQRT 1
#define HAVE_LOG10 1
#define HAVE_LOG 1
#define HAVE_EXP 1
#define HAVE_ASIN 1
#define HAVE_ACOS 1
#define HAVE_ATAN 1
#define HAVE_FMOD 1
#define HAVE_MODF 1
#define HAVE_FREXP 1
#define HAVE_LDEXP 1
#define HAVE_RINT 1
#define HAVE_TRUNC 1
#define HAVE_EXP2 1
#define HAVE_LOG2 1
#define HAVE_ATAN2 1
#define HAVE_POW 1
#define HAVE_NEXTAFTER 1
#define HAVE_STRTOLL 1
#define HAVE_STRTOULL 1
#define HAVE_CBRT 1
#define HAVE_STRTOLD_L 1
#define HAVE_BACKTRACE 1
#define HAVE_MADVISE 1
#define HAVE_XMMINTRIN_H 1
#define HAVE_EMMINTRIN_H 1
#define HAVE_XLOCALE_H 1
#define HAVE_DLFCN_H 1
#define HAVE_SYS_MMAN_H 1
#define HAVE___BUILTIN_ISNAN 1
#define HAVE___BUILTIN_ISINF 1
#define HAVE___BUILTIN_ISFINITE 1
#define HAVE___BUILTIN_BSWAP32 1
#define HAVE___BUILTIN_BSWAP64 1
#define HAVE___BUILTIN_EXPECT 1
#define HAVE___BUILTIN_MUL_OVERFLOW 1
#define HAVE___BUILTIN_CPU_SUPPORTS 1
#define HAVE__M_FROM_INT64 1
#define HAVE__MM_LOAD_PS 1
#define HAVE__MM_PREFETCH 1
#define HAVE__MM_LOAD_PD 1
#define HAVE___BUILTIN_PREFETCH 1
#define HAVE_LINK_AVX 1
#define HAVE_LINK_AVX2 1
#define HAVE_XGETBV 1
#define HAVE_ATTRIBUTE_NONNULL 1
#define HAVE_ATTRIBUTE_TARGET_AVX 1
#define HAVE_ATTRIBUTE_TARGET_AVX2 1
#define HAVE___THREAD 1
#define HAVE_SINF 1
#define HAVE_COSF 1
#define HAVE_TANF 1
#define HAVE_SINHF 1
#define HAVE_COSHF 1
#define HAVE_TANHF 1
#define HAVE_FABSF 1
#define HAVE_FLOORF 1
#define HAVE_CEILF 1
#define HAVE_RINTF 1
#define HAVE_TRUNCF 1
#define HAVE_SQRTF 1
#define HAVE_LOG10F 1
#define HAVE_LOGF 1
#define HAVE_LOG1PF 1
#define HAVE_EXPF 1
#define HAVE_EXPM1F 1
#define HAVE_ASINF 1
#define HAVE_ACOSF 1
#define HAVE_ATANF 1
#define HAVE_ASINHF 1
#define HAVE_ACOSHF 1
#define HAVE_ATANHF 1
#define HAVE_HYPOTF 1
#define HAVE_ATAN2F 1
#define HAVE_POWF 1
#define HAVE_FMODF 1
#define HAVE_MODFF 1
#define HAVE_FREXPF 1
#define HAVE_LDEXPF 1
#define HAVE_EXP2F 1
#define HAVE_LOG2F 1
#define HAVE_COPYSIGNF 1
#define HAVE_NEXTAFTERF 1
#define HAVE_CBRTF 1
#define HAVE_SINL 1
#define HAVE_COSL 1
#define HAVE_TANL 1
#define HAVE_SINHL 1
#define HAVE_COSHL 1
#define HAVE_TANHL 1
#define HAVE_FABSL 1
#define HAVE_FLOORL 1
#define HAVE_CEILL 1
#define HAVE_RINTL 1
#define HAVE_TRUNCL 1
#define HAVE_SQRTL 1
#define HAVE_LOG10L 1
#define HAVE_LOGL 1
#define HAVE_LOG1PL 1
#define HAVE_EXPL 1
#define HAVE_EXPM1L 1
#define HAVE_ASINL 1
#define HAVE_ACOSL 1
#define HAVE_ATANL 1
#define HAVE_ASINHL 1
#define HAVE_ACOSHL 1
#define HAVE_ATANHL 1
#define HAVE_HYPOTL 1
#define HAVE_ATAN2L 1
#define HAVE_POWL 1
#define HAVE_FMODL 1
#define HAVE_MODFL 1
#define HAVE_FREXPL 1
#define HAVE_LDEXPL 1
#define HAVE_EXP2L 1
#define HAVE_LOG2L 1
#define HAVE_COPYSIGNL 1
#define HAVE_NEXTAFTERL 1
#define HAVE_CBRTL 1
#define HAVE_DECL_SIGNBIT
#define HAVE_COMPLEX_H 1
#define HAVE_CABS 1
#define HAVE_CACOS 1
#define HAVE_CACOSH 1
#define HAVE_CARG 1
#define HAVE_CASIN 1
#define HAVE_CASINH 1
#define HAVE_CATAN 1
#define HAVE_CATANH 1
#define HAVE_CCOS 1
#define HAVE_CCOSH 1
#define HAVE_CEXP 1
#define HAVE_CIMAG 1
#define HAVE_CLOG 1
#define HAVE_CONJ 1
#define HAVE_CPOW 1
#define HAVE_CPROJ 1
#define HAVE_CREAL 1
#define HAVE_CSIN 1
#define HAVE_CSINH 1
#define HAVE_CSQRT 1
#define HAVE_CTAN 1
#define HAVE_CTANH 1
#define HAVE_CABSF 1
#define HAVE_CACOSF 1
#define HAVE_CACOSHF 1
#define HAVE_CARGF 1
#define HAVE_CASINF 1
#define HAVE_CASINHF 1
#define HAVE_CATANF 1
#define HAVE_CATANHF 1
#define HAVE_CCOSF 1
#define HAVE_CCOSHF 1
#define HAVE_CEXPF 1
#define HAVE_CIMAGF 1
#define HAVE_CLOGF 1
#define HAVE_CONJF 1
#define HAVE_CPOWF 1
#define HAVE_CPROJF 1
#define HAVE_CREALF 1
#define HAVE_CSINF 1
#define HAVE_CSINHF 1
#define HAVE_CSQRTF 1
#define HAVE_CTANF 1
#define HAVE_CTANHF 1
#define HAVE_CABSL 1
#define HAVE_CACOSL 1
#define HAVE_CACOSHL 1
#define HAVE_CARGL 1
#define HAVE_CASINL 1
#define HAVE_CASINHL 1
#define HAVE_CATANL 1
#define HAVE_CATANHL 1
#define HAVE_CCOSL 1
#define HAVE_CCOSHL 1
#define HAVE_CEXPL 1
#define HAVE_CIMAGL 1
#define HAVE_CLOGL 1
#define HAVE_CONJL 1
#define HAVE_CPOWL 1
#define HAVE_CPROJL 1
#define HAVE_CREALL 1
#define HAVE_CSINL 1
#define HAVE_CSINHL 1
#define HAVE_CSQRTL 1
#define HAVE_CTANL 1
#define HAVE_CTANHL 1
#define NPY_RESTRICT restrict
#define NPY_RELAXED_STRIDES_CHECKING 1
#define HAVE_LDOUBLE_INTEL_EXTENDED_16_BYTES_LE 1
#define NPY_PY3K 1
#ifndef __cplusplus
/* #undef inline */
#endif
#ifndef _NPY_NPY_CONFIG_H_
#error config.h should never be included directly, include npy_config.h instead
#endif
EOF
adding 'build/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy/config.h' to sources.
Generating build/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy/_numpyconfig.h
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c'
clang: _configtest.c
_configtest.c:1:5: warning: incompatible redeclaration of library function 'exp' [-Wincompatible-library-redeclaration]
int exp (void);
^
_configtest.c:1:5: note: 'exp' is a builtin with type 'double (double)'
1 warning generated.
clang _configtest.o -o _configtest
success!
removing: _configtest.c _configtest.o _configtest.o.d _configtest
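
The `incompatible redeclaration` warning above is expected: the generated `_configtest.c` deliberately declares each libm function with a dummy signature just to check that it links, not to call it correctly. A hedged reconstruction of what that one-line test file likely looks like (the full contents are not shown in the log; only the `int exp (void);` line is quoted by clang):

```c
/* Hypothetical reconstruction of the generated _configtest.c for `exp`:
 * the declaration only needs to link against libm, which is why clang
 * warns that it clashes with the builtin `double exp(double)`. */
int exp (void);

int
main (void)
{
    exp();
    return 0;
}
```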
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c'
clang: _configtest.c
success!
removing: _configtest.c _configtest.o _configtest.o.d
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c'
clang: _configtest.c
success!
removing: _configtest.c _configtest.o _configtest.o.d
File: build/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy/_numpyconfig.h
#define NPY_SIZEOF_SHORT SIZEOF_SHORT
#define NPY_SIZEOF_INT SIZEOF_INT
#define NPY_SIZEOF_LONG SIZEOF_LONG
#define NPY_SIZEOF_FLOAT 4
#define NPY_SIZEOF_COMPLEX_FLOAT 8
#define NPY_SIZEOF_DOUBLE 8
#define NPY_SIZEOF_COMPLEX_DOUBLE 16
#define NPY_SIZEOF_LONGDOUBLE 16
#define NPY_SIZEOF_COMPLEX_LONGDOUBLE 32
#define NPY_SIZEOF_PY_INTPTR_T 8
#define NPY_SIZEOF_OFF_T 8
#define NPY_SIZEOF_PY_LONG_LONG 8
#define NPY_SIZEOF_LONGLONG 8
#define NPY_NO_SMP 0
#define NPY_HAVE_DECL_ISNAN
#define NPY_HAVE_DECL_ISINF
#define NPY_HAVE_DECL_ISFINITE
#define NPY_HAVE_DECL_SIGNBIT
#define NPY_USE_C99_COMPLEX 1
#define NPY_HAVE_COMPLEX_DOUBLE 1
#define NPY_HAVE_COMPLEX_FLOAT 1
#define NPY_HAVE_COMPLEX_LONG_DOUBLE 1
#define NPY_RELAXED_STRIDES_CHECKING 1
#define NPY_USE_C99_FORMATS 1
#define NPY_VISIBILITY_HIDDEN __attribute__((visibility("hidden")))
#define NPY_ABI_VERSION 0x01000009
#define NPY_API_VERSION 0x0000000D
#ifndef __STDC_FORMAT_MACROS
#define __STDC_FORMAT_MACROS 1
#endif
EOF
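
The sizes recorded in `_numpyconfig.h` match the standard x86-64 macOS ABI. As a sanity check, they could be verified with C11 static assertions; a minimal sketch, not part of the build:

```c
/* Sketch: verify the sizes reported in _numpyconfig.h above (C11). */
#include <complex.h>

_Static_assert(sizeof(float)          == 4,  "NPY_SIZEOF_FLOAT");
_Static_assert(sizeof(double)         == 8,  "NPY_SIZEOF_DOUBLE");
_Static_assert(sizeof(long double)    == 16, "NPY_SIZEOF_LONGDOUBLE");
_Static_assert(sizeof(float complex)  == 8,  "NPY_SIZEOF_COMPLEX_FLOAT");
_Static_assert(sizeof(double complex) == 16, "NPY_SIZEOF_COMPLEX_DOUBLE");
_Static_assert(sizeof(long long)      == 8,  "NPY_SIZEOF_LONGLONG");
```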
adding 'build/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy/_numpyconfig.h' to sources.
executing numpy/core/code_generators/generate_numpy_api.py
adding 'build/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy/__multiarray_api.h' to sources.
numpy.core - nothing done with h_files = ['build/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy/config.h', 'build/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy/_numpyconfig.h', 'build/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy/__multiarray_api.h']
building extension "numpy.core._multiarray_tests" sources
creating build/src.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray
conv_template:> build/src.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/_multiarray_tests.c
building extension "numpy.core._multiarray_umath" sources
adding 'build/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy/config.h' to sources.
adding 'build/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy/_numpyconfig.h' to sources.
executing numpy/core/code_generators/generate_numpy_api.py
adding 'build/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy/__multiarray_api.h' to sources.
executing numpy/core/code_generators/generate_ufunc_api.py
adding 'build/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy/__ufunc_api.h' to sources.
conv_template:> build/src.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/arraytypes.c
conv_template:> build/src.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/einsum.c
conv_template:> build/src.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/lowlevel_strided_loops.c
conv_template:> build/src.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/nditer_templ.c
conv_template:> build/src.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/scalartypes.c
creating build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath
conv_template:> build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/funcs.inc
adding 'build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath' to include_dirs.
conv_template:> build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/simd.inc
conv_template:> build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/loops.h
conv_template:> build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/loops.c
conv_template:> build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/matmul.h
conv_template:> build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/matmul.c
conv_template:> build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/scalarmath.c
adding 'build/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath' to include_dirs.
conv_template:> build/src.macosx-10.15-x86_64-3.9/numpy/core/src/common/templ_common.h
adding 'build/src.macosx-10.15-x86_64-3.9/numpy/core/src/common' to include_dirs.
numpy.core - nothing done with h_files = ['build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/funcs.inc', 'build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/simd.inc', 'build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/loops.h', 'build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/matmul.h', 'build/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath/npy_math_internal.h', 'build/src.macosx-10.15-x86_64-3.9/numpy/core/src/common/templ_common.h', 'build/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy/config.h', 'build/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy/_numpyconfig.h', 'build/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy/__multiarray_api.h', 'build/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy/__ufunc_api.h']
building extension "numpy.core._umath_tests" sources
conv_template:> build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/_umath_tests.c
building extension "numpy.core._rational_tests" sources
conv_template:> build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/_rational_tests.c
building extension "numpy.core._struct_ufunc_tests" sources
conv_template:> build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/_struct_ufunc_tests.c
building extension "numpy.core._operand_flag_tests" sources
conv_template:> build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/_operand_flag_tests.c
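
The `conv_template:>` lines are NumPy's in-house code generator expanding `.c.src` templates into plain C, one copy per listed type. A rough sketch of the repeat syntax these templates use (from memory, and heavily reduced; this is template input, not compilable C):

```c
/**begin repeat
 * #type = float, double#
 * #TYPE = FLOAT, DOUBLE#
 */
/* conv_template emits this block once per entry above,
 * substituting @type@ / @TYPE@ in each copy. */
static @type@
@TYPE@_add(@type@ a, @type@ b)
{
    return a + b;
}
/**end repeat**/
```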
building extension "numpy.fft.fftpack_lite" sources
building extension "numpy.linalg.lapack_lite" sources
creating build/src.macosx-10.15-x86_64-3.9/numpy/linalg
adding 'numpy/linalg/lapack_lite/python_xerbla.c' to sources.
building extension "numpy.linalg._umath_linalg" sources
adding 'numpy/linalg/lapack_lite/python_xerbla.c' to sources.
conv_template:> build/src.macosx-10.15-x86_64-3.9/numpy/linalg/umath_linalg.c
building extension "numpy.random.mtrand" sources
creating build/src.macosx-10.15-x86_64-3.9/numpy/random
building data_files sources
build_src: building npy-pkg config files
running build_py
creating build/lib.macosx-10.15-x86_64-3.9
creating build/lib.macosx-10.15-x86_64-3.9/numpy
copying numpy/conftest.py -> build/lib.macosx-10.15-x86_64-3.9/numpy
copying numpy/version.py -> build/lib.macosx-10.15-x86_64-3.9/numpy
copying numpy/_globals.py -> build/lib.macosx-10.15-x86_64-3.9/numpy
copying numpy/__init__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy
copying numpy/dual.py -> build/lib.macosx-10.15-x86_64-3.9/numpy
copying numpy/_distributor_init.py -> build/lib.macosx-10.15-x86_64-3.9/numpy
copying numpy/setup.py -> build/lib.macosx-10.15-x86_64-3.9/numpy
copying numpy/ctypeslib.py -> build/lib.macosx-10.15-x86_64-3.9/numpy
copying numpy/matlib.py -> build/lib.macosx-10.15-x86_64-3.9/numpy
copying numpy/_pytesttester.py -> build/lib.macosx-10.15-x86_64-3.9/numpy
copying build/src.macosx-10.15-x86_64-3.9/numpy/__config__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy
creating build/lib.macosx-10.15-x86_64-3.9/numpy/compat
copying numpy/compat/py3k.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/compat
copying numpy/compat/__init__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/compat
copying numpy/compat/setup.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/compat
copying numpy/compat/_inspect.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/compat
creating build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/umath.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/fromnumeric.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/_dtype.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/_add_newdocs.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/_methods.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/_internal.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/_string_helpers.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/multiarray.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/records.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/__init__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/setup_common.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/_aliased_types.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/memmap.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/overrides.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/getlimits.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/_dtype_ctypes.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/defchararray.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/shape_base.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/machar.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/setup.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/numeric.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/function_base.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/einsumfunc.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/umath_tests.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/info.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/numerictypes.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/_type_aliases.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/cversions.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/arrayprint.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/code_generators/generate_numpy_api.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
creating build/lib.macosx-10.15-x86_64-3.9/numpy/distutils
copying numpy/distutils/unixccompiler.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils
copying numpy/distutils/numpy_distribution.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils
copying numpy/distutils/conv_template.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils
copying numpy/distutils/cpuinfo.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils
copying numpy/distutils/ccompiler.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils
copying numpy/distutils/msvc9compiler.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils
copying numpy/distutils/npy_pkg_config.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils
copying numpy/distutils/compat.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils
copying numpy/distutils/misc_util.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils
copying numpy/distutils/log.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils
copying numpy/distutils/line_endings.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils
copying numpy/distutils/lib2def.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils
copying numpy/distutils/pathccompiler.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils
copying numpy/distutils/system_info.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils
copying numpy/distutils/__init__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils
copying numpy/distutils/core.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils
copying numpy/distutils/__version__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils
copying numpy/distutils/exec_command.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils
copying numpy/distutils/from_template.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils
copying numpy/distutils/mingw32ccompiler.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils
copying numpy/distutils/setup.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils
copying numpy/distutils/extension.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils
copying numpy/distutils/msvccompiler.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils
copying numpy/distutils/intelccompiler.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils
copying numpy/distutils/info.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils
copying build/src.macosx-10.15-x86_64-3.9/numpy/distutils/__config__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils
creating build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/command
copying numpy/distutils/command/build.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/command
copying numpy/distutils/command/config_compiler.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/command
copying numpy/distutils/command/build_ext.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/command
copying numpy/distutils/command/config.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/command
copying numpy/distutils/command/install_headers.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/command
copying numpy/distutils/command/build_py.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/command
copying numpy/distutils/command/build_src.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/command
copying numpy/distutils/command/__init__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/command
copying numpy/distutils/command/sdist.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/command
copying numpy/distutils/command/build_scripts.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/command
copying numpy/distutils/command/bdist_rpm.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/command
copying numpy/distutils/command/install_clib.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/command
copying numpy/distutils/command/build_clib.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/command
copying numpy/distutils/command/autodist.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/command
copying numpy/distutils/command/egg_info.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/command
copying numpy/distutils/command/install.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/command
copying numpy/distutils/command/develop.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/command
copying numpy/distutils/command/install_data.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/command
creating build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/fcompiler
copying numpy/distutils/fcompiler/gnu.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/fcompiler
copying numpy/distutils/fcompiler/compaq.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/fcompiler
copying numpy/distutils/fcompiler/intel.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/fcompiler
copying numpy/distutils/fcompiler/none.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/fcompiler
copying numpy/distutils/fcompiler/nag.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/fcompiler
copying numpy/distutils/fcompiler/pg.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/fcompiler
copying numpy/distutils/fcompiler/ibm.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/fcompiler
copying numpy/distutils/fcompiler/sun.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/fcompiler
copying numpy/distutils/fcompiler/lahey.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/fcompiler
copying numpy/distutils/fcompiler/__init__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/fcompiler
copying numpy/distutils/fcompiler/g95.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/fcompiler
copying numpy/distutils/fcompiler/mips.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/fcompiler
copying numpy/distutils/fcompiler/hpux.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/fcompiler
copying numpy/distutils/fcompiler/environment.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/fcompiler
copying numpy/distutils/fcompiler/pathf95.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/fcompiler
copying numpy/distutils/fcompiler/absoft.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/fcompiler
copying numpy/distutils/fcompiler/vast.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/fcompiler
creating build/lib.macosx-10.15-x86_64-3.9/numpy/doc
copying numpy/doc/misc.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/doc
copying numpy/doc/internals.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/doc
copying numpy/doc/creation.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/doc
copying numpy/doc/constants.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/doc
copying numpy/doc/ufuncs.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/doc
copying numpy/doc/__init__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/doc
copying numpy/doc/broadcasting.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/doc
copying numpy/doc/basics.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/doc
copying numpy/doc/subclassing.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/doc
copying numpy/doc/indexing.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/doc
copying numpy/doc/byteswapping.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/doc
copying numpy/doc/structured_arrays.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/doc
copying numpy/doc/glossary.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/doc
creating build/lib.macosx-10.15-x86_64-3.9/numpy/f2py
copying numpy/f2py/cfuncs.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/f2py
copying numpy/f2py/common_rules.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/f2py
copying numpy/f2py/crackfortran.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/f2py
copying numpy/f2py/cb_rules.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/f2py
copying numpy/f2py/__init__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/f2py
copying numpy/f2py/rules.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/f2py
copying numpy/f2py/f2py2e.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/f2py
copying numpy/f2py/func2subr.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/f2py
copying numpy/f2py/__version__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/f2py
copying numpy/f2py/diagnose.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/f2py
copying numpy/f2py/setup.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/f2py
copying numpy/f2py/capi_maps.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/f2py
copying numpy/f2py/f90mod_rules.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/f2py
copying numpy/f2py/f2py_testing.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/f2py
copying numpy/f2py/use_rules.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/f2py
copying numpy/f2py/info.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/f2py
copying numpy/f2py/auxfuncs.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/f2py
copying numpy/f2py/__main__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/f2py
creating build/lib.macosx-10.15-x86_64-3.9/numpy/fft
copying numpy/fft/__init__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/fft
copying numpy/fft/setup.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/fft
copying numpy/fft/helper.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/fft
copying numpy/fft/fftpack.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/fft
copying numpy/fft/info.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/fft
creating build/lib.macosx-10.15-x86_64-3.9/numpy/lib
copying numpy/lib/_iotools.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib
copying numpy/lib/mixins.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib
copying numpy/lib/nanfunctions.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib
copying numpy/lib/recfunctions.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib
copying numpy/lib/histograms.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib
copying numpy/lib/scimath.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib
copying numpy/lib/_version.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib
copying numpy/lib/user_array.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib
copying numpy/lib/__init__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib
copying numpy/lib/format.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib
copying numpy/lib/twodim_base.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib
copying numpy/lib/financial.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib
copying numpy/lib/index_tricks.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib
copying numpy/lib/npyio.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib
copying numpy/lib/shape_base.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib
copying numpy/lib/setup.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib
copying numpy/lib/stride_tricks.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib
copying numpy/lib/utils.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib
copying numpy/lib/arrayterator.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib
copying numpy/lib/function_base.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib
copying numpy/lib/arraysetops.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib
copying numpy/lib/arraypad.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib
copying numpy/lib/type_check.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib
copying numpy/lib/info.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib
copying numpy/lib/polynomial.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib
copying numpy/lib/_datasource.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib
copying numpy/lib/ufunclike.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib
creating build/lib.macosx-10.15-x86_64-3.9/numpy/linalg
copying numpy/linalg/__init__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/linalg
copying numpy/linalg/setup.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/linalg
copying numpy/linalg/linalg.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/linalg
copying numpy/linalg/info.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/linalg
creating build/lib.macosx-10.15-x86_64-3.9/numpy/ma
copying numpy/ma/extras.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/ma
copying numpy/ma/version.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/ma
copying numpy/ma/testutils.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/ma
copying numpy/ma/__init__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/ma
copying numpy/ma/core.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/ma
copying numpy/ma/bench.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/ma
copying numpy/ma/setup.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/ma
copying numpy/ma/timer_comparison.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/ma
copying numpy/ma/mrecords.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/ma
creating build/lib.macosx-10.15-x86_64-3.9/numpy/matrixlib
copying numpy/matrixlib/__init__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/matrixlib
copying numpy/matrixlib/setup.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/matrixlib
copying numpy/matrixlib/defmatrix.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/matrixlib
creating build/lib.macosx-10.15-x86_64-3.9/numpy/polynomial
copying numpy/polynomial/laguerre.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/polynomial
copying numpy/polynomial/_polybase.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/polynomial
copying numpy/polynomial/polyutils.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/polynomial
copying numpy/polynomial/__init__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/polynomial
copying numpy/polynomial/setup.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/polynomial
copying numpy/polynomial/hermite_e.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/polynomial
copying numpy/polynomial/chebyshev.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/polynomial
copying numpy/polynomial/polynomial.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/polynomial
copying numpy/polynomial/legendre.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/polynomial
copying numpy/polynomial/hermite.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/polynomial
creating build/lib.macosx-10.15-x86_64-3.9/numpy/random
copying numpy/random/__init__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/random
copying numpy/random/setup.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/random
copying numpy/random/info.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/random
creating build/lib.macosx-10.15-x86_64-3.9/numpy/testing
copying numpy/testing/nosetester.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/testing
copying numpy/testing/__init__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/testing
copying numpy/testing/noseclasses.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/testing
copying numpy/testing/setup.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/testing
copying numpy/testing/utils.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/testing
copying numpy/testing/print_coercion_tables.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/testing
copying numpy/testing/decorators.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/testing
creating build/lib.macosx-10.15-x86_64-3.9/numpy/testing/_private
copying numpy/testing/_private/nosetester.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/testing/_private
copying numpy/testing/_private/__init__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/testing/_private
copying numpy/testing/_private/noseclasses.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/testing/_private
copying numpy/testing/_private/utils.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/testing/_private
copying numpy/testing/_private/parameterized.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/testing/_private
copying numpy/testing/_private/decorators.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/testing/_private
running build_clib
customize UnixCCompiler
customize UnixCCompiler using build_clib
building 'npymath' library
compiling C sources
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
creating build/temp.macosx-10.15-x86_64-3.9
creating build/temp.macosx-10.15-x86_64-3.9/numpy
creating build/temp.macosx-10.15-x86_64-3.9/numpy/core
creating build/temp.macosx-10.15-x86_64-3.9/numpy/core/src
creating build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/npymath
creating build/temp.macosx-10.15-x86_64-3.9/build
creating build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9
creating build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy
creating build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/core
creating build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/core/src
creating build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath
compile options: '-Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -Inumpy/core/include -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy -Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -c'
clang: numpy/core/src/npymath/npy_math.c
clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath/npy_math_complex.c
clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath/ieee754.c
clang: numpy/core/src/npymath/halffloat.c
numpy/core/src/npymath/npy_math_complex.c.src:48:33: warning: unused variable 'tiny' [-Wunused-const-variable]
static const volatile npy_float tiny = 3.9443045e-31f;
^
numpy/core/src/npymath/npy_math_complex.c.src:67:25: warning: unused variable 'c_halff' [-Wunused-const-variable]
static const npy_cfloat c_halff = {0.5F, 0.0};
^
numpy/core/src/npymath/npy_math_complex.c.src:68:25: warning: unused variable 'c_if' [-Wunused-const-variable]
static const npy_cfloat c_if = {0.0, 1.0F};
^
numpy/core/src/npymath/npy_math_complex.c.src:69:25: warning: unused variable 'c_ihalff' [-Wunused-const-variable]
static const npy_cfloat c_ihalff = {0.0, 0.5F};
^
numpy/core/src/npymath/npy_math_complex.c.src:79:1: warning: unused function 'caddf' [-Wunused-function]
caddf(npy_cfloat a, npy_cfloat b)
^
numpy/core/src/npymath/npy_math_complex.c.src:87:1: warning: unused function 'csubf' [-Wunused-function]
csubf(npy_cfloat a, npy_cfloat b)
^
numpy/core/src/npymath/npy_math_complex.c.src:137:1: warning: unused function 'cnegf' [-Wunused-function]
cnegf(npy_cfloat a)
^
numpy/core/src/npymath/npy_math_complex.c.src:144:1: warning: unused function 'cmulif' [-Wunused-function]
cmulif(npy_cfloat a)
^
numpy/core/src/npymath/npy_math_complex.c.src:67:26: warning: unused variable 'c_half' [-Wunused-const-variable]
static const npy_cdouble c_half = {0.5, 0.0};
^
numpy/core/src/npymath/npy_math_complex.c.src:68:26: warning: unused variable 'c_i' [-Wunused-const-variable]
static const npy_cdouble c_i = {0.0, 1.0};
^
numpy/core/src/npymath/npy_math_complex.c.src:69:26: warning: unused variable 'c_ihalf' [-Wunused-const-variable]
static const npy_cdouble c_ihalf = {0.0, 0.5};
^
numpy/core/src/npymath/npy_math_complex.c.src:79:1: warning: unused function 'cadd' [-Wunused-function]
cadd(npy_cdouble a, npy_cdouble b)
^
numpy/core/src/npymath/npy_math_complex.c.src:87:1: warning: unused function 'csub' [-Wunused-function]
csub(npy_cdouble a, npy_cdouble b)
^
numpy/core/src/npymath/npy_math_complex.c.src:137:1: warning: unused function 'cneg' [-Wunused-function]
cneg(npy_cdouble a)
^
numpy/core/src/npymath/npy_math_complex.c.src:144:1: warning: unused function 'cmuli' [-Wunused-function]
cmuli(npy_cdouble a)
^
numpy/core/src/npymath/npy_math_complex.c.src:67:30: warning: unused variable 'c_halfl' [-Wunused-const-variable]
static const npy_clongdouble c_halfl = {0.5L, 0.0};
^
numpy/core/src/npymath/npy_math_complex.c.src:68:30: warning: unused variable 'c_il' [-Wunused-const-variable]
static const npy_clongdouble c_il = {0.0, 1.0L};
^
numpy/core/src/npymath/npy_math_complex.c.src:69:30: warning: unused variable 'c_ihalfl' [-Wunused-const-variable]
static const npy_clongdouble c_ihalfl = {0.0, 0.5L};
^
numpy/core/src/npymath/npy_math_complex.c.src:79:1: warning: unused function 'caddl' [-Wunused-function]
caddl(npy_clongdouble a, npy_clongdouble b)
^
numpy/core/src/npymath/npy_math_complex.c.src:87:1: warning: unused function 'csubl' [-Wunused-function]
csubl(npy_clongdouble a, npy_clongdouble b)
^
numpy/core/src/npymath/npy_math_complex.c.src:137:1: warning: unused function 'cnegl' [-Wunused-function]
cnegl(npy_clongdouble a)
^
numpy/core/src/npymath/npy_math_complex.c.src:144:1: warning: unused function 'cmulil' [-Wunused-function]
cmulil(npy_clongdouble a)
^
22 warnings generated.
ar: adding 4 object files to build/temp.macosx-10.15-x86_64-3.9/libnpymath.a
ranlib:@ build/temp.macosx-10.15-x86_64-3.9/libnpymath.a
building 'npysort' library
compiling C sources
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
creating build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/core/src/npysort
compile options: '-Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Inumpy/core/include -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy -Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -c'
clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/npysort/quicksort.c
clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/npysort/mergesort.c
clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/npysort/heapsort.c
clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/npysort/selection.c
clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/npysort/binsearch.c
numpy/core/src/npysort/selection.c.src:328:9: warning: code will never be executed [-Wunreachable-code]
npy_intp k;
^~~~~~~~~~~
numpy/core/src/npysort/selection.c.src:326:14: note: silence by adding parentheses to mark code as explicitly dead
else if (0 && kth == num - 1) {
^
/* DISABLES CODE */ ( )
[the identical warning/note pair above repeats 21 more times, once for each remaining template expansion of selection.c.src]
22 warnings generated.
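
All 22 warnings come from the same template pattern: a branch guarded by a literal `0 &&`, which clang correctly flags as dead code. A reduced sketch of the pattern and of the parenthesized form clang itself suggests (a hypothetical reduction, not the real selection.c code):

```c
#include <stddef.h>

/* Reduced sketch of the flagged pattern in selection.c.src. */
static void
sketch(ptrdiff_t kth, ptrdiff_t num)
{
    /* `0 &&` makes this branch constant-false, so clang flags the
     * body as unreachable ... */
    if (0 && kth == num - 1) {
        ptrdiff_t k = kth;
        (void)k;
    }
    /* ... and suggests parenthesizing the constant to mark the code
     * as intentionally disabled, silencing -Wunreachable-code: */
    if ((0) /* DISABLES CODE */ && kth == num - 1) {
        ptrdiff_t k = kth;
        (void)k;
    }
}
```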
ar: adding 5 object files to build/temp.macosx-10.15-x86_64-3.9/libnpysort.a
ranlib:@ build/temp.macosx-10.15-x86_64-3.9/libnpysort.a
running build_ext
customize UnixCCompiler
customize UnixCCompiler using build_ext
building 'numpy.core._dummy' extension
compiling C sources
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
compile options: '-DNPY_INTERNAL_BUILD=1 -DHAVE_NPY_CONFIG_H=1 -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE=1 -D_LARGEFILE64_SOURCE=1 -Inumpy/core/include -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy -Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -c'
clang: numpy/core/src/dummymodule.c
clang -bundle -undefined dynamic_lookup -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/dummymodule.o -L/usr/local/lib -L/usr/local/opt/[email protected]/lib -L/usr/local/opt/sqlite/lib -Lbuild/temp.macosx-10.15-x86_64-3.9 -o build/lib.macosx-10.15-x86_64-3.9/numpy/core/_dummy.cpython-39-darwin.so
building 'numpy.core._multiarray_tests' extension
compiling C sources
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
creating build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray
creating build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/common
compile options: '-DNPY_INTERNAL_BUILD=1 -DHAVE_NPY_CONFIG_H=1 -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE=1 -D_LARGEFILE64_SOURCE=1 -Inumpy/core/include -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy -Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -c'
clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/_multiarray_tests.c
clang: numpy/core/src/common/mem_overlap.c
clang -bundle -undefined dynamic_lookup -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/_multiarray_tests.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/common/mem_overlap.o -L/usr/local/lib -L/usr/local/opt/[email protected]/lib -L/usr/local/opt/sqlite/lib -Lbuild/temp.macosx-10.15-x86_64-3.9 -lnpymath -o build/lib.macosx-10.15-x86_64-3.9/numpy/core/_multiarray_tests.cpython-39-darwin.so
building 'numpy.core._multiarray_umath' extension
compiling C sources
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
creating build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray
creating build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/umath
creating build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath
creating build/temp.macosx-10.15-x86_64-3.9/private
creating build/temp.macosx-10.15-x86_64-3.9/private/var
creating build/temp.macosx-10.15-x86_64-3.9/private/var/folders
creating build/temp.macosx-10.15-x86_64-3.9/private/var/folders/fz
creating build/temp.macosx-10.15-x86_64-3.9/private/var/folders/fz/0j719tys48x7jlnjnwc69smr0000gn
creating build/temp.macosx-10.15-x86_64-3.9/private/var/folders/fz/0j719tys48x7jlnjnwc69smr0000gn/T
creating build/temp.macosx-10.15-x86_64-3.9/private/var/folders/fz/0j719tys48x7jlnjnwc69smr0000gn/T/pip-install-ufzck51l
creating build/temp.macosx-10.15-x86_64-3.9/private/var/folders/fz/0j719tys48x7jlnjnwc69smr0000gn/T/pip-install-ufzck51l/numpy_b0e8a3953a1d4b46801f12bcea55536e
creating build/temp.macosx-10.15-x86_64-3.9/private/var/folders/fz/0j719tys48x7jlnjnwc69smr0000gn/T/pip-install-ufzck51l/numpy_b0e8a3953a1d4b46801f12bcea55536e/numpy
creating build/temp.macosx-10.15-x86_64-3.9/private/var/folders/fz/0j719tys48x7jlnjnwc69smr0000gn/T/pip-install-ufzck51l/numpy_b0e8a3953a1d4b46801f12bcea55536e/numpy/_build_utils
creating build/temp.macosx-10.15-x86_64-3.9/private/var/folders/fz/0j719tys48x7jlnjnwc69smr0000gn/T/pip-install-ufzck51l/numpy_b0e8a3953a1d4b46801f12bcea55536e/numpy/_build_utils/src
compile options: '-DNPY_INTERNAL_BUILD=1 -DHAVE_NPY_CONFIG_H=1 -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE=1 -D_LARGEFILE64_SOURCE=1 -DNO_ATLAS_INFO=3 -DHAVE_CBLAS -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Inumpy/core/include -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy -Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -c'
extra options: '-msse3 -I/System/Library/Frameworks/vecLib.framework/Headers'
clang: numpy/core/src/multiarray/alloc.c
clang: numpy/core/src/multiarray/calculation.c
clang: numpy/core/src/multiarray/array_assign_scalar.c
clang: numpy/core/src/multiarray/convert.c
clang: numpy/core/src/multiarray/ctors.c
clang: numpy/core/src/multiarray/datetime_busday.c
clang: numpy/core/src/multiarray/dragon4.c
clang: numpy/core/src/multiarray/flagsobject.c
numpy/core/src/multiarray/ctors.c:2261:36: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
if (!(PyUString_Check(name) && PyUString_GET_SIZE(name) == 0)) {
^
numpy/core/include/numpy/npy_3kcompat.h:110:28: note: expanded from macro 'PyUString_GET_SIZE'
#define PyUString_GET_SIZE PyUnicode_GET_SIZE
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:261:7: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op) : \
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/ctors.c:2261:36: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations]
if (!(PyUString_Check(name) && PyUString_GET_SIZE(name) == 0)) {
^
numpy/core/include/numpy/npy_3kcompat.h:110:28: note: expanded from macro 'PyUString_GET_SIZE'
#define PyUString_GET_SIZE PyUnicode_GET_SIZE
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:262:14: note: expanded from macro 'PyUnicode_GET_SIZE'
((void)PyUnicode_AsUnicode(_PyObject_CAST(op)),\
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here
Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/ctors.c:2261:36: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
if (!(PyUString_Check(name) && PyUString_GET_SIZE(name) == 0)) {
^
numpy/core/include/numpy/npy_3kcompat.h:110:28: note: expanded from macro 'PyUString_GET_SIZE'
#define PyUString_GET_SIZE PyUnicode_GET_SIZE
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:264:8: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op)))
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
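
These warnings are not errors: numpy's `npy_3kcompat.h` maps `PyUString_GET_SIZE` onto `PyUnicode_GET_SIZE`, part of the `Py_UNICODE` API that has been deprecated since CPython 3.3 and is loudly flagged by the 3.9 headers. A sketch of the check at `ctors.c:2261` written against the non-deprecated API (illustrative only; `name_is_empty` is a made-up helper, not NumPy code):

```c
#include <Python.h>

/* Equivalent of `PyUString_Check(name) && PyUString_GET_SIZE(name) == 0`
 * using the non-deprecated API: PyUnicode_GET_LENGTH reads the
 * canonical (PEP 393) representation directly. */
static int
name_is_empty(PyObject *name)
{
    return PyUnicode_Check(name) && PyUnicode_GET_LENGTH(name) == 0;
}
```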
clang: numpy/core/src/multiarray/arrayobject.c
clang: numpy/core/src/multiarray/array_assign_array.c
clang: numpy/core/src/multiarray/convert_datatype.c
clang: numpy/core/src/multiarray/getset.c
clang: numpy/core/src/multiarray/datetime_busdaycal.c
clang: numpy/core/src/multiarray/buffer.c
clang: numpy/core/src/multiarray/compiled_base.c
clang: numpy/core/src/multiarray/hashdescr.c
clang: numpy/core/src/multiarray/descriptor.c
numpy/core/src/multiarray/descriptor.c:453:13: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
if (PyUString_GET_SIZE(name) == 0) {
^
numpy/core/include/numpy/npy_3kcompat.h:110:28: note: expanded from macro 'PyUString_GET_SIZE'
#define PyUString_GET_SIZE PyUnicode_GET_SIZE
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:261:7: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op) : \
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/descriptor.c:453:13: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations]
if (PyUString_GET_SIZE(name) == 0) {
^
numpy/core/include/numpy/npy_3kcompat.h:110:28: note: expanded from macro 'PyUString_GET_SIZE'
#define PyUString_GET_SIZE PyUnicode_GET_SIZE
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:262:14: note: expanded from macro 'PyUnicode_GET_SIZE'
((void)PyUnicode_AsUnicode(_PyObject_CAST(op)),\
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here
Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/descriptor.c:453:13: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
if (PyUString_GET_SIZE(name) == 0) {
^
numpy/core/include/numpy/npy_3kcompat.h:110:28: note: expanded from macro 'PyUString_GET_SIZE'
#define PyUString_GET_SIZE PyUnicode_GET_SIZE
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:264:8: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op)))
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/descriptor.c:460:48: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
else if (PyUString_Check(title) && PyUString_GET_SIZE(title) > 0) {
^
numpy/core/include/numpy/npy_3kcompat.h:110:28: note: expanded from macro 'PyUString_GET_SIZE'
#define PyUString_GET_SIZE PyUnicode_GET_SIZE
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:261:7: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op) : \
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/descriptor.c:460:48: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations]
else if (PyUString_Check(title) && PyUString_GET_SIZE(title) > 0) {
^
numpy/core/include/numpy/npy_3kcompat.h:110:28: note: expanded from macro 'PyUString_GET_SIZE'
#define PyUString_GET_SIZE PyUnicode_GET_SIZE
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:262:14: note: expanded from macro 'PyUnicode_GET_SIZE'
((void)PyUnicode_AsUnicode(_PyObject_CAST(op)),\
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here
Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/descriptor.c:460:48: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
else if (PyUString_Check(title) && PyUString_GET_SIZE(title) > 0) {
^
numpy/core/include/numpy/npy_3kcompat.h:110:28: note: expanded from macro 'PyUString_GET_SIZE'
#define PyUString_GET_SIZE PyUnicode_GET_SIZE
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:264:8: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op)))
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
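/* The PyUString_GET_SIZE warnings above all funnel through numpy's
 * npy_3kcompat.h alias onto PyUnicode_GET_SIZE, a wstr-based accessor that
 * CPython deprecated in 3.3. A minimal sketch of the non-deprecated
 * replacement -- the helper name is illustrative, not numpy's actual patch: */
#include <Python.h>

static Py_ssize_t
unicode_length(PyObject *name)
{
    /* PyUnicode_GET_LENGTH counts code points and never touches the
     * deprecated wstr representation (available since CPython 3.3). */
    return PyUnicode_GET_LENGTH(name);
}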
clang: numpy/core/src/multiarray/conversion_utils.c
clang: numpy/core/src/multiarray/item_selection.c
clang: numpy/core/src/multiarray/dtype_transfer.c
clang: numpy/core/src/multiarray/mapping.c
clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/arraytypes.c
clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/nditer_templ.c
3 warnings generated.
clang: numpy/core/src/multiarray/datetime.c
numpy/core/src/multiarray/arraytypes.c.src:477:11: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations]
ptr = PyUnicode_AS_UNICODE(temp);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:279:7: note: expanded from macro 'PyUnicode_AS_UNICODE'
PyUnicode_AsUnicode(_PyObject_CAST(op)))
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here
Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/arraytypes.c.src:482:15: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
datalen = PyUnicode_GET_DATA_SIZE(temp);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:268:6: note: expanded from macro 'PyUnicode_GET_DATA_SIZE'
(PyUnicode_GET_SIZE(op) * Py_UNICODE_SIZE)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:261:7: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op) : \
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/arraytypes.c.src:482:15: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations]
datalen = PyUnicode_GET_DATA_SIZE(temp);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:268:6: note: expanded from macro 'PyUnicode_GET_DATA_SIZE'
(PyUnicode_GET_SIZE(op) * Py_UNICODE_SIZE)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:262:14: note: expanded from macro 'PyUnicode_GET_SIZE'
((void)PyUnicode_AsUnicode(_PyObject_CAST(op)),\
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here
Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/arraytypes.c.src:482:15: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
datalen = PyUnicode_GET_DATA_SIZE(temp);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:268:6: note: expanded from macro 'PyUnicode_GET_DATA_SIZE'
(PyUnicode_GET_SIZE(op) * Py_UNICODE_SIZE)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:264:8: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op)))
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
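/* PyUnicode_AS_UNICODE and PyUnicode_GET_DATA_SIZE hand out the deprecated
 * Py_UNICODE buffer directly. A hedged sketch of the 3.3+ pattern instead:
 * copy into a caller-owned wchar_t buffer (helper name is illustrative): */
#include <Python.h>

static int
use_unicode_data(PyObject *temp)
{
    Py_ssize_t datalen;
    wchar_t *ptr = PyUnicode_AsWideCharString(temp, &datalen);  /* malloc'd copy */
    if (ptr == NULL) {
        return -1;                 /* exception already set by CPython */
    }
    /* ... work with ptr[0..datalen), counted in wchar_t units ... */
    PyMem_Free(ptr);               /* caller owns and frees the buffer */
    return 0;
}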
clang: numpy/core/src/multiarray/common.c
numpy/core/src/multiarray/common.c:187:28: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
itemsize = PyUnicode_GET_DATA_SIZE(temp);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:268:6: note: expanded from macro 'PyUnicode_GET_DATA_SIZE'
(PyUnicode_GET_SIZE(op) * Py_UNICODE_SIZE)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:261:7: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op) : \
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/common.c:187:28: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations]
itemsize = PyUnicode_GET_DATA_SIZE(temp);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:268:6: note: expanded from macro 'PyUnicode_GET_DATA_SIZE'
(PyUnicode_GET_SIZE(op) * Py_UNICODE_SIZE)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:262:14: note: expanded from macro 'PyUnicode_GET_SIZE'
((void)PyUnicode_AsUnicode(_PyObject_CAST(op)),\
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here
Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/common.c:187:28: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
itemsize = PyUnicode_GET_DATA_SIZE(temp);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:268:6: note: expanded from macro 'PyUnicode_GET_DATA_SIZE'
(PyUnicode_GET_SIZE(op) * Py_UNICODE_SIZE)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:264:8: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op)))
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/common.c:239:28: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
itemsize = PyUnicode_GET_DATA_SIZE(temp);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:268:6: note: expanded from macro 'PyUnicode_GET_DATA_SIZE'
(PyUnicode_GET_SIZE(op) * Py_UNICODE_SIZE)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:261:7: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op) : \
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/common.c:239:28: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations]
itemsize = PyUnicode_GET_DATA_SIZE(temp);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:268:6: note: expanded from macro 'PyUnicode_GET_DATA_SIZE'
(PyUnicode_GET_SIZE(op) * Py_UNICODE_SIZE)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:262:14: note: expanded from macro 'PyUnicode_GET_SIZE'
((void)PyUnicode_AsUnicode(_PyObject_CAST(op)),\
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here
Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/common.c:239:28: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
itemsize = PyUnicode_GET_DATA_SIZE(temp);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:268:6: note: expanded from macro 'PyUnicode_GET_DATA_SIZE'
(PyUnicode_GET_SIZE(op) * Py_UNICODE_SIZE)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:264:8: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op)))
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/common.c:282:24: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
int itemsize = PyUnicode_GET_DATA_SIZE(obj);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:268:6: note: expanded from macro 'PyUnicode_GET_DATA_SIZE'
(PyUnicode_GET_SIZE(op) * Py_UNICODE_SIZE)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:261:7: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op) : \
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/common.c:282:24: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations]
int itemsize = PyUnicode_GET_DATA_SIZE(obj);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:268:6: note: expanded from macro 'PyUnicode_GET_DATA_SIZE'
(PyUnicode_GET_SIZE(op) * Py_UNICODE_SIZE)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:262:14: note: expanded from macro 'PyUnicode_GET_SIZE'
((void)PyUnicode_AsUnicode(_PyObject_CAST(op)),\
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here
Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/common.c:282:24: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
int itemsize = PyUnicode_GET_DATA_SIZE(obj);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:268:6: note: expanded from macro 'PyUnicode_GET_DATA_SIZE'
(PyUnicode_GET_SIZE(op) * Py_UNICODE_SIZE)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:264:8: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op)))
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
6 warnings generated.
clang: numpy/core/src/multiarray/nditer_pywrap.c
9 warnings generated.
clang: numpy/core/src/multiarray/sequence.c
clang: numpy/core/src/multiarray/shape.c
clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/einsum.c
clang: numpy/core/src/multiarray/methods.c
clang: numpy/core/src/multiarray/iterators.c
clang: numpy/core/src/multiarray/datetime_strings.c
clang: numpy/core/src/multiarray/number.c
clang: numpy/core/src/multiarray/scalarapi.c
clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/scalartypes.c
numpy/core/src/multiarray/scalarapi.c:74:28: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations]
return (void *)PyUnicode_AS_DATA(scalar);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:283:21: note: expanded from macro 'PyUnicode_AS_DATA'
((const char *)(PyUnicode_AS_UNICODE(op)))
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:279:7: note: expanded from macro 'PyUnicode_AS_UNICODE'
PyUnicode_AsUnicode(_PyObject_CAST(op)))
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here
Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/scalarapi.c:135:28: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations]
return (void *)PyUnicode_AS_DATA(scalar);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:283:21: note: expanded from macro 'PyUnicode_AS_DATA'
((const char *)(PyUnicode_AS_UNICODE(op)))
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:279:7: note: expanded from macro 'PyUnicode_AS_UNICODE'
PyUnicode_AsUnicode(_PyObject_CAST(op)))
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here
Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/scalarapi.c:568:29: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
descr->elsize = PyUnicode_GET_DATA_SIZE(sc);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:268:6: note: expanded from macro 'PyUnicode_GET_DATA_SIZE'
(PyUnicode_GET_SIZE(op) * Py_UNICODE_SIZE)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:261:7: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op) : \
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/scalarapi.c:568:29: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations]
descr->elsize = PyUnicode_GET_DATA_SIZE(sc);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:268:6: note: expanded from macro 'PyUnicode_GET_DATA_SIZE'
(PyUnicode_GET_SIZE(op) * Py_UNICODE_SIZE)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:262:14: note: expanded from macro 'PyUnicode_GET_SIZE'
((void)PyUnicode_AsUnicode(_PyObject_CAST(op)),\
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here
Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/scalarapi.c:568:29: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
descr->elsize = PyUnicode_GET_DATA_SIZE(sc);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:268:6: note: expanded from macro 'PyUnicode_GET_DATA_SIZE'
(PyUnicode_GET_SIZE(op) * Py_UNICODE_SIZE)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:264:8: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op)))
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/scalartypes.c.src:475:17: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations]
ip = dptr = PyUnicode_AS_UNICODE(self);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:279:7: note: expanded from macro 'PyUnicode_AS_UNICODE'
PyUnicode_AsUnicode(_PyObject_CAST(op)))
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here
Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/scalartypes.c.src:476:11: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
len = PyUnicode_GET_SIZE(self);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:261:7: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op) : \
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/scalartypes.c.src:476:11: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations]
len = PyUnicode_GET_SIZE(self);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:262:14: note: expanded from macro 'PyUnicode_GET_SIZE'
((void)PyUnicode_AsUnicode(_PyObject_CAST(op)),\
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here
Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/scalartypes.c.src:476:11: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
len = PyUnicode_GET_SIZE(self);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:264:8: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op)))
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/scalartypes.c.src:481:11: warning: 'PyUnicode_FromUnicode' is deprecated [-Wdeprecated-declarations]
new = PyUnicode_FromUnicode(ip, len);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:551:1: note: 'PyUnicode_FromUnicode' has been explicitly marked deprecated here
Py_DEPRECATED(3.3) PyAPI_FUNC(PyObject*) PyUnicode_FromUnicode(
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/scalartypes.c.src:475:17: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations]
ip = dptr = PyUnicode_AS_UNICODE(self);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:279:7: note: expanded from macro 'PyUnicode_AS_UNICODE'
PyUnicode_AsUnicode(_PyObject_CAST(op)))
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here
Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/scalartypes.c.src:476:11: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
len = PyUnicode_GET_SIZE(self);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:261:7: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op) : \
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/scalartypes.c.src:476:11: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations]
len = PyUnicode_GET_SIZE(self);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:262:14: note: expanded from macro 'PyUnicode_GET_SIZE'
((void)PyUnicode_AsUnicode(_PyObject_CAST(op)),\
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here
Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/scalartypes.c.src:476:11: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
len = PyUnicode_GET_SIZE(self);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:264:8: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op)))
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/scalartypes.c.src:481:11: warning: 'PyUnicode_FromUnicode' is deprecated [-Wdeprecated-declarations]
new = PyUnicode_FromUnicode(ip, len);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:551:1: note: 'PyUnicode_FromUnicode' has been explicitly marked deprecated here
Py_DEPRECATED(3.3) PyAPI_FUNC(PyObject*) PyUnicode_FromUnicode(
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/scalartypes.c.src:1849:18: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations]
buffer = PyUnicode_AS_DATA(self);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:283:21: note: expanded from macro 'PyUnicode_AS_DATA'
((const char *)(PyUnicode_AS_UNICODE(op)))
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:279:7: note: expanded from macro 'PyUnicode_AS_UNICODE'
PyUnicode_AsUnicode(_PyObject_CAST(op)))
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here
Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/scalartypes.c.src:1850:18: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
buflen = PyUnicode_GET_DATA_SIZE(self);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:268:6: note: expanded from macro 'PyUnicode_GET_DATA_SIZE'
(PyUnicode_GET_SIZE(op) * Py_UNICODE_SIZE)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:261:7: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op) : \
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/scalartypes.c.src:1850:18: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations]
buflen = PyUnicode_GET_DATA_SIZE(self);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:268:6: note: expanded from macro 'PyUnicode_GET_DATA_SIZE'
(PyUnicode_GET_SIZE(op) * Py_UNICODE_SIZE)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:262:14: note: expanded from macro 'PyUnicode_GET_SIZE'
((void)PyUnicode_AsUnicode(_PyObject_CAST(op)),\
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here
Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/scalartypes.c.src:1850:18: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
buflen = PyUnicode_GET_DATA_SIZE(self);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:268:6: note: expanded from macro 'PyUnicode_GET_DATA_SIZE'
(PyUnicode_GET_SIZE(op) * Py_UNICODE_SIZE)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:264:8: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op)))
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
5 warnings generated.
clang: numpy/core/src/multiarray/typeinfo.c
clang: numpy/core/src/multiarray/refcount.c
clang: numpy/core/src/multiarray/usertypes.c
clang: numpy/core/src/multiarray/multiarraymodule.c
clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/lowlevel_strided_loops.c
clang: numpy/core/src/multiarray/vdot.c
clang: numpy/core/src/umath/umathmodule.c
clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/matmul.c
clang: numpy/core/src/umath/reduction.c
clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/loops.c
clang: numpy/core/src/multiarray/nditer_api.c
14 warnings generated.
clang: numpy/core/src/multiarray/strfuncs.c
numpy/core/src/umath/loops.c.src:655:18: warning: 'PyEval_CallObjectWithKeywords' is deprecated [-Wdeprecated-declarations]
result = PyEval_CallObject(tocall, arglist);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/ceval.h:24:5: note: expanded from macro 'PyEval_CallObject'
PyEval_CallObjectWithKeywords(callable, arg, (PyObject *)NULL)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/ceval.h:17:1: note: 'PyEval_CallObjectWithKeywords' has been explicitly marked deprecated here
Py_DEPRECATED(3.9) PyAPI_FUNC(PyObject *) PyEval_CallObjectWithKeywords(
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/strfuncs.c:178:13: warning: 'PyEval_CallObjectWithKeywords' is deprecated [-Wdeprecated-declarations]
s = PyEval_CallObject(PyArray_ReprFunction, arglist);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/ceval.h:24:5: note: expanded from macro 'PyEval_CallObject'
PyEval_CallObjectWithKeywords(callable, arg, (PyObject *)NULL)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/ceval.h:17:1: note: 'PyEval_CallObjectWithKeywords' has been explicitly marked deprecated here
Py_DEPRECATED(3.9) PyAPI_FUNC(PyObject *) PyEval_CallObjectWithKeywords(
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/strfuncs.c:195:13: warning: 'PyEval_CallObjectWithKeywords' is deprecated [-Wdeprecated-declarations]
s = PyEval_CallObject(PyArray_StrFunction, arglist);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/ceval.h:24:5: note: expanded from macro 'PyEval_CallObject'
PyEval_CallObjectWithKeywords(callable, arg, (PyObject *)NULL)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/ceval.h:17:1: note: 'PyEval_CallObjectWithKeywords' has been explicitly marked deprecated here
Py_DEPRECATED(3.9) PyAPI_FUNC(PyObject *) PyEval_CallObjectWithKeywords(
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
2 warnings generated.
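/* PyEval_CallObject expands to PyEval_CallObjectWithKeywords, which CPython
 * deprecated in 3.9 in favour of the PyObject_Call* family. A one-line
 * substitution, sketched as a wrapper (illustrative, not the actual numpy fix): */
#include <Python.h>

static PyObject *
call_with_args(PyObject *tocall, PyObject *arglist)
{
    /* Same semantics as PyEval_CallObject(tocall, arglist), no deprecation. */
    return PyObject_CallObject(tocall, arglist);
}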
clang: numpy/core/src/multiarray/temp_elide.c
clang: numpy/core/src/umath/cpuid.c
clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/scalarmath.c
clang: numpy/core/src/umath/ufunc_object.c
numpy/core/src/umath/scalarmath.c.src:1449:1: warning: unused function 'byte_long' [-Wunused-function]
byte_long(PyObject *obj)
^
numpy/core/src/umath/scalarmath.c.src:1449:1: warning: unused function 'ubyte_long' [-Wunused-function]
ubyte_long(PyObject *obj)
^
numpy/core/src/umath/scalarmath.c.src:1449:1: warning: unused function 'short_long' [-Wunused-function]
short_long(PyObject *obj)
^
numpy/core/src/umath/scalarmath.c.src:1449:1: warning: unused function 'ushort_long' [-Wunused-function]
ushort_long(PyObject *obj)
^
numpy/core/src/umath/scalarmath.c.src:1449:1: warning: unused function 'int_long' [-Wunused-function]
int_long(PyObject *obj)
^
numpy/core/src/umath/scalarmath.c.src:1449:1: warning: unused function 'uint_long' [-Wunused-function]
uint_long(PyObject *obj)
^
numpy/core/src/umath/scalarmath.c.src:1449:1: warning: unused function 'long_long' [-Wunused-function]
long_long(PyObject *obj)
^
numpy/core/src/umath/scalarmath.c.src:1449:1: warning: unused function 'ulong_long' [-Wunused-function]
ulong_long(PyObject *obj)
^
numpy/core/src/umath/scalarmath.c.src:1449:1: warning: unused function 'longlong_long' [-Wunused-function]
longlong_long(PyObject *obj)
^
numpy/core/src/umath/scalarmath.c.src:1449:1: warning: unused function 'ulonglong_long' [-Wunused-function]
ulonglong_long(PyObject *obj)
^
numpy/core/src/umath/scalarmath.c.src:1449:1: warning: unused function 'half_long' [-Wunused-function]
half_long(PyObject *obj)
^
numpy/core/src/umath/scalarmath.c.src:1449:1: warning: unused function 'float_long' [-Wunused-function]
float_long(PyObject *obj)
^
numpy/core/src/umath/scalarmath.c.src:1449:1: warning: unused function 'double_long' [-Wunused-function]
double_long(PyObject *obj)
^
numpy/core/src/umath/scalarmath.c.src:1449:1: warning: unused function 'longdouble_long' [-Wunused-function]
longdouble_long(PyObject *obj)
^
numpy/core/src/umath/scalarmath.c.src:1449:1: warning: unused function 'cfloat_long' [-Wunused-function]
cfloat_long(PyObject *obj)
^
numpy/core/src/umath/scalarmath.c.src:1449:1: warning: unused function 'cdouble_long' [-Wunused-function]
cdouble_long(PyObject *obj)
^
numpy/core/src/umath/scalarmath.c.src:1449:1: warning: unused function 'clongdouble_long' [-Wunused-function]
clongdouble_long(PyObject *obj)
^
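/* The byte_long .. clongdouble_long warnings are routine fallout of numpy's
 * .c.src templating: the helper is generated for every scalar type whether or
 * not this build references it. The conventional clang/GCC silencing is an
 * unused attribute, sketched here with an illustrative signature: */
static long __attribute__((unused))
byte_long_example(void *obj)
{
    (void)obj;   /* template-generated helper; deliberately may go unused */
    return 0;
}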
clang: numpy/core/src/multiarray/nditer_constr.c
numpy/core/src/umath/ufunc_object.c:657:19: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
for (i = 0; i < len; i++) {
~ ^ ~~~
clang: numpy/core/src/umath/override.c
clang: numpy/core/src/npymath/npy_math.c
clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath/ieee754.c
numpy/core/src/umath/loops.c.src:2527:22: warning: code will never be executed [-Wunreachable-code]
npy_intp n = dimensions[0];
^~~~~~~~~~
numpy/core/src/umath/loops.c.src:2526:29: note: silence by adding parentheses to mark code as explicitly dead
if (IS_BINARY_REDUCE && 0) {
^
/* DISABLES CODE */ ( )
numpy/core/src/umath/loops.c.src:2527:22: warning: code will never be executed [-Wunreachable-code]
npy_intp n = dimensions[0];
^~~~~~~~~~
numpy/core/src/umath/loops.c.src:2526:29: note: silence by adding parentheses to mark code as explicitly dead
if (IS_BINARY_REDUCE && 0) {
^
/* DISABLES CODE */ ( )
numpy/core/src/umath/loops.c.src:2527:22: warning: code will never be executed [-Wunreachable-code]
npy_intp n = dimensions[0];
^~~~~~~~~~
numpy/core/src/umath/loops.c.src:2526:29: note: silence by adding parentheses to mark code as explicitly dead
if (IS_BINARY_REDUCE && 0) {
^
/* DISABLES CODE */ ( )
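/* clang's own suggestion for the three unreachable-code warnings above is to
 * parenthesize the constant so the dead branch reads as intentional.
 * IS_BINARY_REDUCE and dimensions come from the surrounding numpy loop
 * template; this is a sketch of the suggested edit, not a patch: */
if (IS_BINARY_REDUCE && (0)) {   /* parentheses mark the branch as deliberately dead */
    npy_intp n = dimensions[0];
    /* ... */
}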
clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath/npy_math_complex.c
numpy/core/src/npymath/npy_math_complex.c.src:48:33: warning: unused variable 'tiny' [-Wunused-const-variable]
static const volatile npy_float tiny = 3.9443045e-31f;
^
numpy/core/src/npymath/npy_math_complex.c.src:67:25: warning: unused variable 'c_halff' [-Wunused-const-variable]
static const npy_cfloat c_halff = {0.5F, 0.0};
^
numpy/core/src/npymath/npy_math_complex.c.src:68:25: warning: unused variable 'c_if' [-Wunused-const-variable]
static const npy_cfloat c_if = {0.0, 1.0F};
^
numpy/core/src/npymath/npy_math_complex.c.src:69:25: warning: unused variable 'c_ihalff' [-Wunused-const-variable]
static const npy_cfloat c_ihalff = {0.0, 0.5F};
^
numpy/core/src/npymath/npy_math_complex.c.src:79:1: warning: unused function 'caddf' [-Wunused-function]
caddf(npy_cfloat a, npy_cfloat b)
^
numpy/core/src/npymath/npy_math_complex.c.src:87:1: warning: unused function 'csubf' [-Wunused-function]
csubf(npy_cfloat a, npy_cfloat b)
^
numpy/core/src/npymath/npy_math_complex.c.src:137:1: warning: unused function 'cnegf' [-Wunused-function]
cnegf(npy_cfloat a)
^
numpy/core/src/npymath/npy_math_complex.c.src:144:1: warning: unused function 'cmulif' [-Wunused-function]
cmulif(npy_cfloat a)
^
numpy/core/src/npymath/npy_math_complex.c.src:67:26: warning: unused variable 'c_half' [-Wunused-const-variable]
static const npy_cdouble c_half = {0.5, 0.0};
^
numpy/core/src/npymath/npy_math_complex.c.src:68:26: warning: unused variable 'c_i' [-Wunused-const-variable]
static const npy_cdouble c_i = {0.0, 1.0};
^
numpy/core/src/npymath/npy_math_complex.c.src:69:26: warning: unused variable 'c_ihalf' [-Wunused-const-variable]
static const npy_cdouble c_ihalf = {0.0, 0.5};
^
numpy/core/src/npymath/npy_math_complex.c.src:79:1: warning: unused function 'cadd' [-Wunused-function]
cadd(npy_cdouble a, npy_cdouble b)
^
numpy/core/src/npymath/npy_math_complex.c.src:87:1: warning: unused function 'csub' [-Wunused-function]
csub(npy_cdouble a, npy_cdouble b)
^
numpy/core/src/npymath/npy_math_complex.c.src:137:1: warning: unused function 'cneg' [-Wunused-function]
cneg(npy_cdouble a)
^
numpy/core/src/npymath/npy_math_complex.c.src:144:1: warning: unused function 'cmuli' [-Wunused-function]
cmuli(npy_cdouble a)
^
numpy/core/src/npymath/npy_math_complex.c.src:67:30: warning: unused variable 'c_halfl' [-Wunused-const-variable]
static const npy_clongdouble c_halfl = {0.5L, 0.0};
^
numpy/core/src/npymath/npy_math_complex.c.src:68:30: warning: unused variable 'c_il' [-Wunused-const-variable]
static const npy_clongdouble c_il = {0.0, 1.0L};
^
numpy/core/src/npymath/npy_math_complex.c.src:69:30: warning: unused variable 'c_ihalfl' [-Wunused-const-variable]
static const npy_clongdouble c_ihalfl = {0.0, 0.5L};
^
numpy/core/src/npymath/npy_math_complex.c.src:79:1: warning: unused function 'caddl' [-Wunused-function]
caddl(npy_clongdouble a, npy_clongdouble b)
^
numpy/core/src/npymath/npy_math_complex.c.src:87:1: warning: unused function 'csubl' [-Wunused-function]
csubl(npy_clongdouble a, npy_clongdouble b)
^
numpy/core/src/npymath/npy_math_complex.c.src:137:1: warning: unused function 'cnegl' [-Wunused-function]
cnegl(npy_clongdouble a)
^
numpy/core/src/npymath/npy_math_complex.c.src:144:1: warning: unused function 'cmulil' [-Wunused-function]
cmulil(npy_clongdouble a)
^
22 warnings generated.
clang: numpy/core/src/common/mem_overlap.c
clang: numpy/core/src/npymath/halffloat.c
clang: numpy/core/src/common/array_assign.c
clang: numpy/core/src/common/ufunc_override.c
clang: numpy/core/src/common/npy_longdouble.c
clang: numpy/core/src/common/numpyos.c
clang: numpy/core/src/common/ucsnarrow.c
1 warning generated.
clang: numpy/core/src/umath/extobj.c
numpy/core/src/common/ucsnarrow.c:139:34: warning: 'PyUnicode_FromUnicode' is deprecated [-Wdeprecated-declarations]
ret = (PyUnicodeObject *)PyUnicode_FromUnicode((Py_UNICODE*)buf,
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:551:1: note: 'PyUnicode_FromUnicode' has been explicitly marked deprecated here
Py_DEPRECATED(3.3) PyAPI_FUNC(PyObject*) PyUnicode_FromUnicode(
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
1 warning generated.
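/* PyUnicode_FromUnicode (deprecated since 3.3) builds a str from a raw
 * Py_UNICODE buffer, and Py_UNICODE aliases wchar_t, so the supported
 * constructor is PyUnicode_FromWideChar. Illustrative sketch: */
#include <Python.h>

static PyObject *
unicode_from_buffer(const wchar_t *buf, Py_ssize_t len)
{
    /* Non-deprecated replacement for PyUnicode_FromUnicode(buf, len). */
    return PyUnicode_FromWideChar(buf, len);
}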
clang: numpy/core/src/common/python_xerbla.c
clang: numpy/core/src/common/cblasfuncs.c
clang: /private/var/folders/fz/0j719tys48x7jlnjnwc69smr0000gn/T/pip-install-ufzck51l/numpy_b0e8a3953a1d4b46801f12bcea55536e/numpy/_build_utils/src/apple_sgemv_fix.c
In file included from /private/var/folders/fz/0j719tys48x7jlnjnwc69smr0000gn/T/pip-install-ufzck51l/numpy_b0e8a3953a1d4b46801f12bcea55536e/numpy/_build_utils/src/apple_sgemv_fix.c:26:
In file included from numpy/core/include/numpy/arrayobject.h:4:
In file included from numpy/core/include/numpy/ndarrayobject.h:21:
build/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy/__multiarray_api.h:1463:1: warning: unused function '_import_array' [-Wunused-function]
_import_array(void)
^
1 warning generated.
17 warnings generated.
clang: numpy/core/src/umath/ufunc_type_resolution.c
4 warnings generated.
4 warnings generated.
clang -bundle -undefined dynamic_lookup -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/alloc.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/arrayobject.o build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/arraytypes.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/array_assign_scalar.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/array_assign_array.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/buffer.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/calculation.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/compiled_base.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/common.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/convert.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/convert_datatype.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/conversion_utils.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/ctors.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/datetime.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/datetime_strings.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/datetime_busday.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/datetime_busdaycal.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/descriptor.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/dragon4.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/dtype_transfer.o build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/einsum.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/flagsobject.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/getset.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/hashdescr.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/item_selection.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/iterators.o build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/lowlevel_strided_loops.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/mapping.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/methods.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/multiarraymodule.o build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/nditer_templ.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/nditer_api.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/nditer_constr.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/nditer_pywrap.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/number.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/refcount.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/sequence.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/shape.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/scalarapi.o build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/scalartypes.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/strfuncs.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/temp_elide.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/typeinfo.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/usertypes.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/vdot.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/umath/umathmodule.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/umath/reduction.o build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/loops.o build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/matmul.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/umath/ufunc_object.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/umath/extobj.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/umath/cpuid.o build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/scalarmath.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/umath/ufunc_type_resolution.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/umath/override.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/npymath/npy_math.o build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath/ieee754.o build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath/npy_math_complex.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/npymath/halffloat.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/common/array_assign.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/common/mem_overlap.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/common/npy_longdouble.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/common/ucsnarrow.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/common/ufunc_override.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/common/numpyos.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/common/cblasfuncs.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/common/python_xerbla.o build/temp.macosx-10.15-x86_64-3.9/private/var/folders/fz/0j719tys48x7jlnjnwc69smr0000gn/T/pip-install-ufzck51l/numpy_b0e8a3953a1d4b46801f12bcea55536e/numpy/_build_utils/src/apple_sgemv_fix.o -L/usr/local/lib -L/usr/local/opt/[email protected]/lib -L/usr/local/opt/sqlite/lib -Lbuild/temp.macosx-10.15-x86_64-3.9 -lnpymath -lnpysort -o build/lib.macosx-10.15-x86_64-3.9/numpy/core/_multiarray_umath.cpython-39-darwin.so -Wl,-framework -Wl,Accelerate
building 'numpy.core._umath_tests' extension
compiling C sources
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
compile options: '-DNPY_INTERNAL_BUILD=1 -DHAVE_NPY_CONFIG_H=1 -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE=1 -D_LARGEFILE64_SOURCE=1 -Inumpy/core/include -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy -Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -c'
clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/_umath_tests.c
clang -bundle -undefined dynamic_lookup -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/_umath_tests.o -L/usr/local/lib -L/usr/local/opt/[email protected]/lib -L/usr/local/opt/sqlite/lib -Lbuild/temp.macosx-10.15-x86_64-3.9 -o build/lib.macosx-10.15-x86_64-3.9/numpy/core/_umath_tests.cpython-39-darwin.so
building 'numpy.core._rational_tests' extension
compiling C sources
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
compile options: '-DNPY_INTERNAL_BUILD=1 -DHAVE_NPY_CONFIG_H=1 -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE=1 -D_LARGEFILE64_SOURCE=1 -Inumpy/core/include -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy -Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -c'
clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/_rational_tests.c
clang -bundle -undefined dynamic_lookup -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/_rational_tests.o -L/usr/local/lib -L/usr/local/opt/[email protected]/lib -L/usr/local/opt/sqlite/lib -Lbuild/temp.macosx-10.15-x86_64-3.9 -o build/lib.macosx-10.15-x86_64-3.9/numpy/core/_rational_tests.cpython-39-darwin.so
building 'numpy.core._struct_ufunc_tests' extension
compiling C sources
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
compile options: '-DNPY_INTERNAL_BUILD=1 -DHAVE_NPY_CONFIG_H=1 -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE=1 -D_LARGEFILE64_SOURCE=1 -Inumpy/core/include -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy -Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -c'
clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/_struct_ufunc_tests.c
clang -bundle -undefined dynamic_lookup -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/_struct_ufunc_tests.o -L/usr/local/lib -L/usr/local/opt/[email protected]/lib -L/usr/local/opt/sqlite/lib -Lbuild/temp.macosx-10.15-x86_64-3.9 -o build/lib.macosx-10.15-x86_64-3.9/numpy/core/_struct_ufunc_tests.cpython-39-darwin.so
building 'numpy.core._operand_flag_tests' extension
compiling C sources
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
compile options: '-DNPY_INTERNAL_BUILD=1 -DHAVE_NPY_CONFIG_H=1 -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE=1 -D_LARGEFILE64_SOURCE=1 -Inumpy/core/include -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy -Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -c'
clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/_operand_flag_tests.c
clang -bundle -undefined dynamic_lookup -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/_operand_flag_tests.o -L/usr/local/lib -L/usr/local/opt/[email protected]/lib -L/usr/local/opt/sqlite/lib -Lbuild/temp.macosx-10.15-x86_64-3.9 -o build/lib.macosx-10.15-x86_64-3.9/numpy/core/_operand_flag_tests.cpython-39-darwin.so
building 'numpy.fft.fftpack_lite' extension
compiling C sources
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
creating build/temp.macosx-10.15-x86_64-3.9/numpy/fft
compile options: '-Inumpy/core/include -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy -Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -c'
clang: numpy/fft/fftpack_litemodule.c
clang: numpy/fft/fftpack.c
clang -bundle -undefined dynamic_lookup -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk build/temp.macosx-10.15-x86_64-3.9/numpy/fft/fftpack_litemodule.o build/temp.macosx-10.15-x86_64-3.9/numpy/fft/fftpack.o -L/usr/local/lib -L/usr/local/opt/[email protected]/lib -L/usr/local/opt/sqlite/lib -Lbuild/temp.macosx-10.15-x86_64-3.9 -o build/lib.macosx-10.15-x86_64-3.9/numpy/fft/fftpack_lite.cpython-39-darwin.so
building 'numpy.linalg.lapack_lite' extension
compiling C sources
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
creating build/temp.macosx-10.15-x86_64-3.9/numpy/linalg
creating build/temp.macosx-10.15-x86_64-3.9/numpy/linalg/lapack_lite
compile options: '-DNO_ATLAS_INFO=3 -DHAVE_CBLAS -Inumpy/core/include -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy -Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -c'
extra options: '-msse3 -I/System/Library/Frameworks/vecLib.framework/Headers'
clang: numpy/linalg/lapack_litemodule.c
clang: numpy/linalg/lapack_lite/python_xerbla.c
clang -bundle -undefined dynamic_lookup -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk build/temp.macosx-10.15-x86_64-3.9/numpy/linalg/lapack_litemodule.o build/temp.macosx-10.15-x86_64-3.9/numpy/linalg/lapack_lite/python_xerbla.o -L/usr/local/lib -L/usr/local/opt/[email protected]/lib -L/usr/local/opt/sqlite/lib -Lbuild/temp.macosx-10.15-x86_64-3.9 -o build/lib.macosx-10.15-x86_64-3.9/numpy/linalg/lapack_lite.cpython-39-darwin.so -Wl,-framework -Wl,Accelerate
building 'numpy.linalg._umath_linalg' extension
compiling C sources
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
creating build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/linalg
compile options: '-DNO_ATLAS_INFO=3 -DHAVE_CBLAS -Inumpy/core/include -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy -Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -c'
extra options: '-msse3 -I/System/Library/Frameworks/vecLib.framework/Headers'
clang: build/src.macosx-10.15-x86_64-3.9/numpy/linalg/umath_linalg.c
numpy/linalg/umath_linalg.c.src:735:32: warning: unknown warning group '-Wmaybe-uninitialized', ignored [-Wunknown-warning-option]
#pragma GCC diagnostic ignored "-Wmaybe-uninitialized"
^
numpy/linalg/umath_linalg.c.src:541:1: warning: unused function 'dump_ufunc_object' [-Wunused-function]
dump_ufunc_object(PyUFuncObject* ufunc)
^
numpy/linalg/umath_linalg.c.src:566:1: warning: unused function 'dump_linearize_data' [-Wunused-function]
dump_linearize_data(const char* name, const LINEARIZE_DATA_t* params)
^
numpy/linalg/umath_linalg.c.src:602:1: warning: unused function 'dump_FLOAT_matrix' [-Wunused-function]
dump_FLOAT_matrix(const char* name,
^
numpy/linalg/umath_linalg.c.src:602:1: warning: unused function 'dump_DOUBLE_matrix' [-Wunused-function]
dump_DOUBLE_matrix(const char* name,
^
numpy/linalg/umath_linalg.c.src:602:1: warning: unused function 'dump_CFLOAT_matrix' [-Wunused-function]
dump_CFLOAT_matrix(const char* name,
^
numpy/linalg/umath_linalg.c.src:602:1: warning: unused function 'dump_CDOUBLE_matrix' [-Wunused-function]
dump_CDOUBLE_matrix(const char* name,
^
numpy/linalg/umath_linalg.c.src:865:1: warning: unused function 'zero_FLOAT_matrix' [-Wunused-function]
zero_FLOAT_matrix(void *dst_in, const LINEARIZE_DATA_t* data)
^
numpy/linalg/umath_linalg.c.src:865:1: warning: unused function 'zero_DOUBLE_matrix' [-Wunused-function]
zero_DOUBLE_matrix(void *dst_in, const LINEARIZE_DATA_t* data)
^
numpy/linalg/umath_linalg.c.src:865:1: warning: unused function 'zero_CFLOAT_matrix' [-Wunused-function]
zero_CFLOAT_matrix(void *dst_in, const LINEARIZE_DATA_t* data)
^
numpy/linalg/umath_linalg.c.src:865:1: warning: unused function 'zero_CDOUBLE_matrix' [-Wunused-function]
zero_CDOUBLE_matrix(void *dst_in, const LINEARIZE_DATA_t* data)
^
numpy/linalg/umath_linalg.c.src:1862:1: warning: unused function 'dump_geev_params' [-Wunused-function]
dump_geev_params(const char *name, GEEV_PARAMS_t* params)
^
numpy/linalg/umath_linalg.c.src:2132:1: warning: unused function 'init_cgeev' [-Wunused-function]
init_cgeev(GEEV_PARAMS_t* params,
^
numpy/linalg/umath_linalg.c.src:2213:1: warning: unused function 'process_cgeev_results' [-Wunused-function]
process_cgeev_results(GEEV_PARAMS_t *NPY_UNUSED(params))
^
numpy/linalg/umath_linalg.c.src:2376:1: warning: unused function 'dump_gesdd_params' [-Wunused-function]
dump_gesdd_params(const char *name,
^
numpy/linalg/umath_linalg.c.src:2864:1: warning: unused function 'dump_gelsd_params' [-Wunused-function]
dump_gelsd_params(const char *name,
^
16 warnings generated.
clang -bundle -undefined dynamic_lookup -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/linalg/umath_linalg.o build/temp.macosx-10.15-x86_64-3.9/numpy/linalg/lapack_lite/python_xerbla.o -L/usr/local/lib -L/usr/local/opt/[email protected]/lib -L/usr/local/opt/sqlite/lib -Lbuild/temp.macosx-10.15-x86_64-3.9 -lnpymath -o build/lib.macosx-10.15-x86_64-3.9/numpy/linalg/_umath_linalg.cpython-39-darwin.so -Wl,-framework -Wl,Accelerate
building 'numpy.random.mtrand' extension
compiling C sources
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
creating build/temp.macosx-10.15-x86_64-3.9/numpy/random
creating build/temp.macosx-10.15-x86_64-3.9/numpy/random/mtrand
compile options: '-D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE=1 -D_LARGEFILE64_SOURCE=1 -Inumpy/core/include -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy -Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -c'
clang: numpy/random/mtrand/mtrand.c
clang: numpy/random/mtrand/initarray.c
clang: numpy/random/mtrand/randomkit.c
clang: numpy/random/mtrand/distributions.c
numpy/random/mtrand/mtrand.c:40400:34: error: no member named 'tp_print' in 'struct _typeobject'
__pyx_type_6mtrand_RandomState.tp_print = 0;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ^
numpy/random/mtrand/mtrand.c:42673:22: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
(PyUnicode_GET_SIZE(**name) != PyUnicode_GET_SIZE(key)) ? 1 :
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:261:7: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op) : \
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/random/mtrand/mtrand.c:42673:22: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations]
(PyUnicode_GET_SIZE(**name) != PyUnicode_GET_SIZE(key)) ? 1 :
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:262:14: note: expanded from macro 'PyUnicode_GET_SIZE'
((void)PyUnicode_AsUnicode(_PyObject_CAST(op)),\
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here
Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/random/mtrand/mtrand.c:42673:22: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
(PyUnicode_GET_SIZE(**name) != PyUnicode_GET_SIZE(key)) ? 1 :
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:264:8: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op)))
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/random/mtrand/mtrand.c:42673:52: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
(PyUnicode_GET_SIZE(**name) != PyUnicode_GET_SIZE(key)) ? 1 :
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:261:7: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op) : \
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/random/mtrand/mtrand.c:42673:52: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations]
(PyUnicode_GET_SIZE(**name) != PyUnicode_GET_SIZE(key)) ? 1 :
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:262:14: note: expanded from macro 'PyUnicode_GET_SIZE'
((void)PyUnicode_AsUnicode(_PyObject_CAST(op)),\
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here
Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/random/mtrand/mtrand.c:42673:52: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
(PyUnicode_GET_SIZE(**name) != PyUnicode_GET_SIZE(key)) ? 1 :
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:264:8: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op)))
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/random/mtrand/mtrand.c:42689:26: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
(PyUnicode_GET_SIZE(**argname) != PyUnicode_GET_SIZE(key)) ? 1 :
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:261:7: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op) : \
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/random/mtrand/mtrand.c:42689:26: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations]
(PyUnicode_GET_SIZE(**argname) != PyUnicode_GET_SIZE(key)) ? 1 :
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:262:14: note: expanded from macro 'PyUnicode_GET_SIZE'
((void)PyUnicode_AsUnicode(_PyObject_CAST(op)),\
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here
Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/random/mtrand/mtrand.c:42689:26: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
(PyUnicode_GET_SIZE(**argname) != PyUnicode_GET_SIZE(key)) ? 1 :
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:264:8: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op)))
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/random/mtrand/mtrand.c:42689:59: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
(PyUnicode_GET_SIZE(**argname) != PyUnicode_GET_SIZE(key)) ? 1 :
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:261:7: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op) : \
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/random/mtrand/mtrand.c:42689:59: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations]
(PyUnicode_GET_SIZE(**argname) != PyUnicode_GET_SIZE(key)) ? 1 :
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:262:14: note: expanded from macro 'PyUnicode_GET_SIZE'
((void)PyUnicode_AsUnicode(_PyObject_CAST(op)),\
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here
Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/random/mtrand/mtrand.c:42689:59: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
(PyUnicode_GET_SIZE(**argname) != PyUnicode_GET_SIZE(key)) ? 1 :
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:264:8: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op)))
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
12 warnings and 1 error generated.
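A hedged aside, not part of the original log: the hard error above comes from the Cython-generated `mtrand.c` setting the `tp_print` slot, which was removed from the C API in Python 3.9, so this numpy release cannot be built from source on 3.9. Assuming that diagnosis, installing a newer prebuilt numpy first should avoid the source build entirely:

```
pip install --upgrade pip
pip install "numpy>=1.19.3"  # assumption: first numpy series shipping Python 3.9 wheels
```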
error: Command "clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE=1 -D_LARGEFILE64_SOURCE=1 -Inumpy/core/include -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy -Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -c numpy/random/mtrand/mtrand.c -o build/temp.macosx-10.15-x86_64-3.9/numpy/random/mtrand/mtrand.o -MMD -MF build/temp.macosx-10.15-x86_64-3.9/numpy/random/mtrand/mtrand.o.d" failed with exit status 1 | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1696/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1696/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6393 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6393/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6393/comments | https://api.github.com/repos/huggingface/datasets/issues/6393/events | https://github.com/huggingface/datasets/issues/6393 | 1,984,913,259 | I_kwDODunzps52T19r | 6,393 | Filter occasionally hangs | {
"avatar_url": "https://avatars.githubusercontent.com/u/43149077?v=4",
"events_url": "https://api.github.com/users/dakinggg/events{/privacy}",
"followers_url": "https://api.github.com/users/dakinggg/followers",
"following_url": "https://api.github.com/users/dakinggg/following{/other_user}",
"gists_url": "https://api.github.com/users/dakinggg/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dakinggg",
"id": 43149077,
"login": "dakinggg",
"node_id": "MDQ6VXNlcjQzMTQ5MDc3",
"organizations_url": "https://api.github.com/users/dakinggg/orgs",
"received_events_url": "https://api.github.com/users/dakinggg/received_events",
"repos_url": "https://api.github.com/users/dakinggg/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dakinggg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dakinggg/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dakinggg"
} | [] | open | false | null | [] | null | [
"It looks like I may not be the first to encounter this: https://github.com/huggingface/datasets/issues/3172",
"Adding some more information, it seems to occur more frequently with large (millions of samples) datasets.",
"More information. My code is structured as (1) load (2) map (3) filter (4) filter. It was always the second filter that failed. Combining the two filters into one seems to reliably work.",
"@lhoestq it'd be great if someone had a chance to look at this. I suspect it is impacting many users given the other issue that I linked.",
"Hi ! Sorry for the late response. Was it happening after the first or the second filter ?\r\n\r\nIt looks like an issue with the garbage collector (which makes it random). Maybe datasets created with `filter` are not always handled properly ? cc @mariosasko",
"It was after the second filter (and combining the two filters into one seemingly resolved it). I obviously haven't tried all settings to know that these details are causal, but it did work for me.",
"Thanks, that's good to know.\r\n\r\nThe stacktrace suggests an issue when `del self._indices` is called, which happens when a filtered dataset falls out of scope. The indices are a PyArrow table memory mapped from disk, so I'm not quite sure how calling `del` on it can cause this issue. We do `del self._indices` to make sure the file on disk is not used anymore by the current process and avoid e.g. permission errors.\r\n\r\nHopefully we can find a way to reproduce this error, otherwise it will be quite hard to understand what happened",
"Yeah, I have a reliable repro, but it is not even close to minimal and uses a dataset I can't share. Perhaps you could try getting close to my setting.\r\n\r\n(1) make a large (~20GB) jsonl with prompt/response pairs\r\n(2) load it on a linux machine (`dataset = load_dataset(...)`)\r\n(3) map a tokenizer to it, with multiprocessing (`tokenized_dataset = dataset.map(...)`)\r\n(4) filter it once based on something, with multiprocessing (`filtered_1 = tokenized_dataset.filter(...)`)\r\n(5) filter it again based on something, with multiprocessing (`filtered_2 = filtered_1.filter(...)`)\r\n\r\nI included the variable names just in case it is relevant that I was creating new datasets each time, not overwriting the same variable."
] | "2023-11-09T06:18:30Z" | "2023-11-21T17:39:26Z" | null | NONE | null | null | null | ### Describe the bug
A call to `.filter` occasionally hangs (after the filter is complete, according to tqdm)
There is a trace produced
```
Exception ignored in: <function Dataset.__del__ at 0x7efb48130c10>
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/datasets/arrow_dataset.py", line 1366, in __del__
if hasattr(self, "_indices"):
File "/usr/lib/python3/dist-packages/composer/core/engine.py", line 123, in sigterm_handler
sys.exit(128 + signal)
SystemExit: 143
```
but I'm not sure if the trace is actually from `datasets`, or from surrounding code that is trying to clean up after datasets gets stuck.
Unfortunately I can't reproduce this issue anywhere close to reliably. It happens infrequently when using `num_proc > 1`. Anecdotally I started seeing it when using larger datasets (~10M samples).
### Steps to reproduce the bug
N/A see description
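A minimal sketch of the reported setup, based on the comment thread above; the column name, predicates, and `num_proc` value are hypothetical stand-ins, not taken from the original report:

```python
from datasets import load_dataset

# Hypothetical shape of the failing pipeline: load a large JSONL dataset,
# map a tokenizer-like function, then run two successive multiprocess filters.
dataset = load_dataset("json", data_files="data.jsonl", split="train")
tokenized = dataset.map(lambda ex: {"n_tokens": len(ex["text"].split())}, num_proc=16)
filtered_1 = tokenized.filter(lambda ex: ex["n_tokens"] > 0, num_proc=16)
filtered_2 = filtered_1.filter(lambda ex: ex["n_tokens"] < 2048, num_proc=16)  # reported to hang here

# Workaround reported in the thread: combine both predicates into one filter call.
combined = tokenized.filter(lambda ex: 0 < ex["n_tokens"] < 2048, num_proc=16)
```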
### Expected behavior
map/filter calls always complete successfully
### Environment info
- `datasets` version: 2.14.6
- Platform: Linux-5.4.0-137-generic-x86_64-with-glibc2.31
- Python version: 3.10.13
- Huggingface_hub version: 0.17.3
- PyArrow version: 13.0.0
- Pandas version: 2.1.2 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6393/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6393/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3215 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3215/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3215/comments | https://api.github.com/repos/huggingface/datasets/issues/3215/events | https://github.com/huggingface/datasets/pull/3215 | 1,045,011,207 | PR_kwDODunzps4uGx4o | 3,215 | Small updates to to_tf_dataset documentation | {
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Rocketknight1",
"id": 12866554,
"login": "Rocketknight1",
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Rocketknight1"
} | [] | closed | false | null | [] | null | [
"@stevhliu Accepted both suggestions, thanks for the review!"
] | "2021-11-04T17:22:01Z" | "2021-11-04T18:55:38Z" | "2021-11-04T18:55:37Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3215.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3215",
"merged_at": "2021-11-04T18:55:37Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3215.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3215"
} | I added a little more description about `to_tf_dataset` compared to just setting the format | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3215/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3215/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4994 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4994/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4994/comments | https://api.github.com/repos/huggingface/datasets/issues/4994/events | https://github.com/huggingface/datasets/issues/4994 | 1,379,084,015 | I_kwDODunzps5SMybv | 4,994 | delete the hardcoded license list in `datasets` | {
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/julien-c",
"id": 326577,
"login": "julien-c",
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"repos_url": "https://api.github.com/users/julien-c/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"type": "User",
"url": "https://api.github.com/users/julien-c"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | [] | "2022-09-20T09:14:41Z" | "2022-09-22T11:45:47Z" | "2022-09-22T11:45:47Z" | MEMBER | null | null | null | > Feel free to delete the license list in `datasets` [...]
>
> Also FYI in #4926 I also removed all the validation steps anyway (language, license, types etc.)
_Originally posted by @lhoestq in https://github.com/huggingface/datasets/issues/4930#issuecomment-1238401662_
> [...], in my opinion we can just delete this file from `datasets`, the validation is happening hub-side anyways now?
_Originally posted by @julien-c in https://github.com/huggingface/datasets/issues/4930#issuecomment-1238390659_ | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4994/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4994/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2643 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2643/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2643/comments | https://api.github.com/repos/huggingface/datasets/issues/2643/events | https://github.com/huggingface/datasets/issues/2643 | 944,220,273 | MDU6SXNzdWU5NDQyMjAyNzM= | 2,643 | Enum used in map functions will raise a RecursionError with dill. | {
"avatar_url": "https://avatars.githubusercontent.com/u/100702?v=4",
"events_url": "https://api.github.com/users/jorgeecardona/events{/privacy}",
"followers_url": "https://api.github.com/users/jorgeecardona/followers",
"following_url": "https://api.github.com/users/jorgeecardona/following{/other_user}",
"gists_url": "https://api.github.com/users/jorgeecardona/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jorgeecardona",
"id": 100702,
"login": "jorgeecardona",
"node_id": "MDQ6VXNlcjEwMDcwMg==",
"organizations_url": "https://api.github.com/users/jorgeecardona/orgs",
"received_events_url": "https://api.github.com/users/jorgeecardona/received_events",
"repos_url": "https://api.github.com/users/jorgeecardona/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jorgeecardona/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jorgeecardona/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jorgeecardona"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | [] | null | [
"I'm running into this as well. (Thank you so much for reporting @jorgeecardona — was staring at this massive stack trace and unsure what exactly was wrong!)",
"Hi ! Thanks for reporting :)\r\n\r\nUntil this is fixed on `dill`'s side, we could implement a custom saving in our Pickler indefined in utils.py_utils.py\r\nThere is already a suggestion in this message about how to do it:\r\nhttps://github.com/uqfoundation/dill/issues/250#issuecomment-852566284\r\n\r\nLet me know if such a workaround could help, and feel free to open a PR if you want to contribute !",
"I have the same bug.\r\nthe code is as follows:\r\n![image](https://user-images.githubusercontent.com/84262181/139785849-620dd4ac-86ce-4212-8163-942bbca305aa.png)\r\nthe error is: \r\n![image](https://user-images.githubusercontent.com/84262181/139785899-88a9bd75-c60b-45a5-b819-830c7c096f3d.png)\r\n\r\nLook for the solution for this bug.",
"Hi ! I think your RecursionError comes from a different issue @BitcoinNLPer , could you open a separate issue please ?\r\n\r\nAlso which dataset are you using ? I tried loading `CodedotAI/code_clippy` but I get a different error\r\n```python\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/Users/quentinlhoest/Desktop/hf/datasets/src/datasets/load.py\", line 1615, in load_dataset\r\n **config_kwargs,\r\n File \"/Users/quentinlhoest/Desktop/hf/datasets/src/datasets/load.py\", line 1446, in load_dataset_builder\r\n builder_cls = import_main_class(dataset_module.module_path)\r\n File \"/Users/quentinlhoest/Desktop/hf/datasets/src/datasets/load.py\", line 101, in import_main_class\r\n module = importlib.import_module(module_path)\r\n File \"/Users/quentinlhoest/.virtualenvs/hf-datasets/lib/python3.7/importlib/__init__.py\", line 127, in import_module\r\n return _bootstrap._gcd_import(name[level:], package, level)\r\n File \"<frozen importlib._bootstrap>\", line 1006, in _gcd_import\r\n File \"<frozen importlib._bootstrap>\", line 983, in _find_and_load\r\n File \"<frozen importlib._bootstrap>\", line 967, in _find_and_load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 677, in _load_unlocked\r\n File \"<frozen importlib._bootstrap_external>\", line 728, in exec_module\r\n File \"<frozen importlib._bootstrap>\", line 219, in _call_with_frames_removed\r\n File \"/Users/quentinlhoest/.cache/huggingface/modules/datasets_modules/datasets/CodedotAI___code_clippy/d332f69d036e8c80f47bc9a96d676c3fa30cb50af7bb81e2d4d12e80b83efc4d/code_clippy.py\", line 66, in <module>\r\n url_elements = results.find_all(\"a\")\r\nAttributeError: 'NoneType' object has no attribute 'find_all'\r\n```"
] | "2021-07-14T09:16:08Z" | "2021-11-02T09:51:11Z" | null | NONE | null | null | null | ## Describe the bug
Enums used in functions passed to `map` will fail at pickling with a maximum recursion exception as described here: https://github.com/uqfoundation/dill/issues/250#issuecomment-852566284
In my particular case, I use an enum to define an argument with fixed options, using the `TrainingArguments` dataclass as base class and the `HfArgumentParser`. In the same file I use a `ds.map` that tries to pickle the contents of the module, including the definition of the enum, which runs into the dill bug described above.
## Steps to reproduce the bug
```python
from datasets import load_dataset
from enum import Enum
class A(Enum):
a = 'a'
def main():
a = A.a
def f(x):
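        # the closure captures the enum member 'a'; pickling f makes dill recurse into class A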
return {} if a == a.a else x
ds = load_dataset('cnn_dailymail', '3.0.0')['test']
ds = ds.map(f, num_proc=15)
if __name__ == "__main__":
main()
```
## Expected results
The known problem with dill could be prevented as explained in the link above (workaround). Since `HfArgumentParser` nicely uses the enum class for choices, it makes sense to also deal with this bug under the hood.
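In the meantime, a minimal user-side sketch that sidesteps pickling the enum member entirely; this is an assumption on my part (comparing plain values is enough here), not part of the original report:

```python
# Hypothetical workaround: capture the enum's plain value instead of the member,
# so the mapped closure never references the Enum class.
a_value = A.a.value

def f(x):
    return {} if a_value == 'a' else x

ds = ds.map(f, num_proc=15)
```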
## Actual results
```python
File "/home/xxxx/miniconda3/lib/python3.8/site-packages/dill/_dill.py", line 1373, in save_type
pickler.save_reduce(_create_type, (type(obj), obj.__name__,
File "/home/xxxx/miniconda3/lib/python3.8/pickle.py", line 690, in save_reduce
save(args)
File "/home/xxxx/miniconda3/lib/python3.8/pickle.py", line 558, in save
f(self, obj) # Call unbound method with explicit self
File "/home/xxxx/miniconda3/lib/python3.8/pickle.py", line 899, in save_tuple
save(element)
File "/home/xxxx/miniconda3/lib/python3.8/pickle.py", line 534, in save
self.framer.commit_frame()
File "/home/xxxx/miniconda3/lib/python3.8/pickle.py", line 220, in commit_frame
if f.tell() >= self._FRAME_SIZE_TARGET or force:
RecursionError: maximum recursion depth exceeded while calling a Python object
```
## Environment info
- `datasets` version: 1.8.0
- Platform: Linux-5.9.0-4-amd64-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyArrow version: 3.0.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2643/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2643/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6272 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6272/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6272/comments | https://api.github.com/repos/huggingface/datasets/issues/6272/events | https://github.com/huggingface/datasets/issues/6272 | 1,920,831,487 | I_kwDODunzps5yfY__ | 6,272 | Duplicate `data_files` when named `<split>/<split>.parquet` | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | [] | null | [
"Also reported in https://github.com/huggingface/datasets/issues/6259",
"I think it's best to drop duplicates with a `set` (as a temporary fix) and improve the patterns when/if https://github.com/fsspec/filesystem_spec/pull/1382 gets merged. @lhoestq Do you have some other ideas?",
"Alternatively we could just use this no ?\r\n\r\n```python\r\nif config.FSSPEC_VERSION < version.parse(\"2023.9.0\"):\r\n KEYWORDS_IN_PATH_NAME_BASE_PATTERNS = [\r\n \"{keyword}[{sep}/]**\",\r\n \"**[{sep}]{keyword}[{sep}/]**\",\r\n \"**/{keyword}[{sep}/]**\",\r\n ]\r\nelse:\r\n KEYWORDS_IN_PATH_NAME_BASE_PATTERNS = [\r\n \"{keyword}[{sep}/]**\",\r\n \"**/*[{sep}]{keyword}[{sep}/]**\",\r\n \"**/*/{keyword}[{sep}/]**\",\r\n ]\r\n```\r\n\r\nThis way no need to implement sets, which would require a bit of work since we've always considered a list of pattern to be resolved as the concatenated list of resolved files for each pattern (including duplicates)\r\n",
"Arf `\"**/*/{keyword}[{sep}/]**\"` does return `data/keyword.txt` in latest `fsspec` but not in `glob.glob`\r\n\r\nEDIT: actually forgot to set `recursive=True`",
"Actually `glob.glob` does return it with `recursive=True` ! my bad",
"Pff just tested and my idea sucks, pattern 1 and 3 obviously give duplicates ",
"> I think it's best to drop duplicates with a set (as a temporary fix)\r\n\r\nI started https://github.com/huggingface/datasets/pull/6278 to use DataFilesSet objects instead of DataFilesList"
] | "2023-10-01T15:43:56Z" | "2023-10-05T10:32:27Z" | null | MEMBER | null | null | null | e.g. with `u23429/stock_1_minute_ticker`
```ipython
In [1]: from datasets import *
In [2]: b = load_dataset_builder("u23429/stock_1_minute_ticker")
Downloading readme: 100%|██████████████████████████| 627/627 [00:00<00:00, 246kB/s]
In [3]: b.config.data_files
Out[3]:
{NamedSplit('train'): ['hf://datasets/u23429/stock_1_minute_ticker@65c973cf4ec061f01a363b40da4c1bb128ba4166/train/train.parquet',
'hf://datasets/u23429/stock_1_minute_ticker@65c973cf4ec061f01a363b40da4c1bb128ba4166/train/train.parquet'],
NamedSplit('validation'): ['hf://datasets/u23429/stock_1_minute_ticker@65c973cf4ec061f01a363b40da4c1bb128ba4166/validation/validation.parquet',
'hf://datasets/u23429/stock_1_minute_ticker@65c973cf4ec061f01a363b40da4c1bb128ba4166/validation/validation.parquet'],
NamedSplit('test'): ['hf://datasets/u23429/stock_1_minute_ticker@65c973cf4ec061f01a363b40da4c1bb128ba4166/test/test.parquet',
'hf://datasets/u23429/stock_1_minute_ticker@65c973cf4ec061f01a363b40da4c1bb128ba4166/test/test.parquet']}
```
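Until the pattern resolution is fixed, a minimal user-side sketch; the assumption here is that an order-preserving de-duplication of the resolved files is acceptable:

```python
from datasets import load_dataset_builder

b = load_dataset_builder("u23429/stock_1_minute_ticker")
# dict.fromkeys keeps the first occurrence of each path and preserves order.
deduped = {split: list(dict.fromkeys(files)) for split, files in b.config.data_files.items()}
```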
This bug is present in the current `datasets` 2.14.5 and also on `main` even after https://github.com/huggingface/datasets/pull/6244 cc @mariosasko | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6272/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6272/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4939 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4939/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4939/comments | https://api.github.com/repos/huggingface/datasets/issues/4939/events | https://github.com/huggingface/datasets/pull/4939 | 1,363,468,679 | PR_kwDODunzps4-cw4A | 4,939 | Fix NonMatchingChecksumError in adv_glue dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | "2022-09-06T15:31:16Z" | "2022-09-06T17:42:10Z" | "2022-09-06T17:39:16Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4939.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4939",
"merged_at": "2022-09-06T17:39:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4939.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4939"
} | Fix issue reported on the Hub: https://huggingface.co/datasets/adv_glue/discussions/1 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4939/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4939/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/880 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/880/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/880/comments | https://api.github.com/repos/huggingface/datasets/issues/880/events | https://github.com/huggingface/datasets/issues/880 | 748,949,606 | MDU6SXNzdWU3NDg5NDk2MDY= | 880 | Add SQA | {
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/NielsRogge",
"id": 48327001,
"login": "NielsRogge",
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"type": "User",
"url": "https://api.github.com/users/NielsRogge"
} | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | closed | false | null | [] | null | [
"I’ll take this one to test the workflow for the sprint next week cc @yjernite @lhoestq ",
"@thomwolf here's a slightly adapted version of the code from the [official Tapas repository](https://github.com/google-research/tapas/blob/master/tapas/utils/interaction_utils.py) that is used to turn the `answer_coordinates` and `answer_texts` columns into true Python lists of tuples/strings:\r\n\r\n```\r\nimport pandas as pd\r\nimport ast\r\n\r\ndata = pd.read_csv(\"/content/sqa_data/random-split-1-dev.tsv\", sep='\\t')\r\n\r\ndef _parse_answer_coordinates(answer_coordinate_str):\r\n \"\"\"Parses the answer_coordinates of a question.\r\n Args:\r\n answer_coordinate_str: A string representation of a Python list of tuple\r\n strings.\r\n For example: \"['(1, 4)','(1, 3)', ...]\"\r\n \"\"\"\r\n\r\n try:\r\n answer_coordinates = []\r\n # make a list of strings\r\n coords = ast.literal_eval(answer_coordinate_str)\r\n # parse each string as a tuple\r\n for row_index, column_index in sorted(\r\n ast.literal_eval(coord) for coord in coords):\r\n answer_coordinates.append((row_index, column_index))\r\n except SyntaxError:\r\n raise ValueError('Unable to evaluate %s' % answer_coordinate_str)\r\n \r\n return answer_coordinates\r\n\r\n\r\ndef _parse_answer_text(answer_text):\r\n \"\"\"Populates the answer_texts field of `answer` by parsing `answer_text`.\r\n Args:\r\n answer_text: A string representation of a Python list of strings.\r\n For example: \"[u'test', u'hello', ...]\"\r\n \"\"\"\r\n try:\r\n answer = []\r\n for value in ast.literal_eval(answer_text):\r\n answer.append(value)\r\n except SyntaxError:\r\n raise ValueError('Unable to evaluate %s' % answer_text)\r\n\r\n return answer\r\n\r\ndata['answer_coordinates'] = data['answer_coordinates'].apply(lambda coords_str: _parse_answer_coordinates(coords_str))\r\ndata['answer_text'] = data['answer_text'].apply(lambda txt: _parse_answer_text(txt))\r\n```\r\n\r\nHere I'm using Pandas to read in one of the TSV files (the dev set). \r\n\r\n",
"Closing since SQA was added in #1566 "
] | "2020-11-23T16:31:55Z" | "2020-12-23T13:58:24Z" | "2020-12-23T13:58:23Z" | CONTRIBUTOR | null | null | null | ## Adding a Dataset
- **Name:** SQA (Sequential Question Answering) by Microsoft.
- **Description:** The SQA dataset was created to explore the task of answering sequences of inter-related questions on HTML tables. It has 6,066 sequences with 17,553 questions in total.
- **Paper:** https://www.microsoft.com/en-us/research/publication/search-based-neural-structured-learning-sequential-question-answering/
- **Data:** https://www.microsoft.com/en-us/download/details.aspx?id=54253
- **Motivation:** currently, the [Tapas](https://ai.googleblog.com/2020/04/using-neural-networks-to-find-answers.html) algorithm by Google AI is being added to the Transformers library (see https://github.com/huggingface/transformers/pull/8113). It would be great to use that model in combination with this dataset, on which it achieves SOTA results (average question accuracy of 0.71).
Note 1: this dataset actually consists of 2 types of files:
1) TSV files, containing the questions, answer coordinates and answer texts (for training, dev and test)
2) a folder of csv files, which contain the actual tabular data
Note 2: if you download the dataset straight from the download link above, you will see that the `answer_coordinates` and `answer_text` columns are string representations of lists (of tuple strings and of plain strings, respectively), which is not ideal. It would be better to parse them into true Python lists of tuples and strings (using `ast.literal_eval`) before uploading them to the HuggingFace hub.
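For illustration, the parsing suggested above boils down to something like this (adapted from the snippet in the comments):
```python
import ast

# "['(1, 4)', '(1, 3)']" -> [(1, 4), (1, 3)]
coords = [ast.literal_eval(c) for c in ast.literal_eval("['(1, 4)', '(1, 3)']")]

# "[u'test', u'hello']" -> ['test', 'hello']
texts = ast.literal_eval("[u'test', u'hello']")
```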
Adding this would be great! Then we could possibly also add [WTQ (WikiTable Questions)](https://github.com/ppasupat/WikiTableQuestions) and [TabFact (Tabular Fact Checking)](https://github.com/wenhuchen/Table-Fact-Checking) on which TAPAS also achieves state-of-the-art results. Note that the TAPAS algorithm requires these datasets to first be converted into the SQA format.
Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/880/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/880/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2660 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2660/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2660/comments | https://api.github.com/repos/huggingface/datasets/issues/2660/events | https://github.com/huggingface/datasets/pull/2660 | 946,316,180 | MDExOlB1bGxSZXF1ZXN0NjkxNTA4NzE0 | 2,660 | Move checks from _map_single to map | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | [
"@lhoestq This one has been open for a while. Could you please take a look?",
"@lhoestq Ready for the final review!",
"I forgot to update the signature of `DatasetDict.map`, so did that now."
] | "2021-07-16T13:53:33Z" | "2021-09-06T14:12:23Z" | "2021-09-06T14:12:23Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2660.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2660",
"merged_at": "2021-09-06T14:12:23Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2660.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2660"
} | The goal of this PR is to remove duplicated checks in the `map` logic to execute them only once whenever possible (`fn_kwargs`, `input_columns`, ...). Additionally, this PR improves the consistency (to align it with `input_columns`) of the `remove_columns` check by adding support for a single string value, which is then wrapped into a list. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2660/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2660/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2573 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2573/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2573/comments | https://api.github.com/repos/huggingface/datasets/issues/2573/events | https://github.com/huggingface/datasets/issues/2573 | 934,584,745 | MDU6SXNzdWU5MzQ1ODQ3NDU= | 2,573 | Finding right block-size with JSON loading difficult for user | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | [] | null | [
"This was actually a second error arising from a too small block-size in the json reader.\r\n\r\nFinding the right block size is difficult for the layman user"
] | "2021-07-01T08:48:35Z" | "2021-07-01T19:10:53Z" | null | MEMBER | null | null | null | As reported by @thomwolf, while loading a JSON Lines file with "json" loading script, he gets
> json.decoder.JSONDecodeError: Extra data: line 2 column 1 (char 383)
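A hedged sketch of the usual workaround — raising the JSON reader's chunk size so pyarrow sees complete objects per block (the exact keyword has varied across releases, `block_size` in older ones and `chunksize` in later ones, so treat this as an assumption):
```python
from datasets import load_dataset

# Assumption: the packaged "json" builder exposes a chunk-size option.
ds = load_dataset("json", data_files="data.jsonl", chunksize=40 << 20)  # ~40 MiB blocks
```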
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2573/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2573/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5093 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5093/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5093/comments | https://api.github.com/repos/huggingface/datasets/issues/5093/events | https://github.com/huggingface/datasets/issues/5093 | 1,402,939,660 | I_kwDODunzps5TnykM | 5,093 | Mismatch between tutorial and doc | {
"avatar_url": "https://avatars.githubusercontent.com/u/22726840?v=4",
"events_url": "https://api.github.com/users/clefourrier/events{/privacy}",
"followers_url": "https://api.github.com/users/clefourrier/followers",
"following_url": "https://api.github.com/users/clefourrier/following{/other_user}",
"gists_url": "https://api.github.com/users/clefourrier/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/clefourrier",
"id": 22726840,
"login": "clefourrier",
"node_id": "MDQ6VXNlcjIyNzI2ODQw",
"organizations_url": "https://api.github.com/users/clefourrier/orgs",
"received_events_url": "https://api.github.com/users/clefourrier/received_events",
"repos_url": "https://api.github.com/users/clefourrier/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/clefourrier/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/clefourrier/subscriptions",
"type": "User",
"url": "https://api.github.com/users/clefourrier"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
},
{
"color": "7057ff",
"default": true,
"description": "Good for newcomers",
"id": 1935892877,
"name": "good first issue",
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue"
},
{
"color": "DF8D62",
"default": false,
"description": "",
"id": 4614514401,
"name": "hacktoberfest",
"node_id": "LA_kwDODunzps8AAAABEwvm4Q",
"url": "https://api.github.com/repos/huggingface/datasets/labels/hacktoberfest"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/9295277?v=4",
"events_url": "https://api.github.com/users/riccardobucco/events{/privacy}",
"followers_url": "https://api.github.com/users/riccardobucco/followers",
"following_url": "https://api.github.com/users/riccardobucco/following{/other_user}",
"gists_url": "https://api.github.com/users/riccardobucco/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/riccardobucco",
"id": 9295277,
"login": "riccardobucco",
"node_id": "MDQ6VXNlcjkyOTUyNzc=",
"organizations_url": "https://api.github.com/users/riccardobucco/orgs",
"received_events_url": "https://api.github.com/users/riccardobucco/received_events",
"repos_url": "https://api.github.com/users/riccardobucco/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/riccardobucco/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/riccardobucco/subscriptions",
"type": "User",
"url": "https://api.github.com/users/riccardobucco"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/9295277?v=4",
"events_url": "https://api.github.com/users/riccardobucco/events{/privacy}",
"followers_url": "https://api.github.com/users/riccardobucco/followers",
"following_url": "https://api.github.com/users/riccardobucco/following{/other_user}",
"gists_url": "https://api.github.com/users/riccardobucco/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/riccardobucco",
"id": 9295277,
"login": "riccardobucco",
"node_id": "MDQ6VXNlcjkyOTUyNzc=",
"organizations_url": "https://api.github.com/users/riccardobucco/orgs",
"received_events_url": "https://api.github.com/users/riccardobucco/received_events",
"repos_url": "https://api.github.com/users/riccardobucco/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/riccardobucco/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/riccardobucco/subscriptions",
"type": "User",
"url": "https://api.github.com/users/riccardobucco"
}
] | null | [
"Hi, thanks for reporting! This line should be replaced with \r\n```python\r\ndataset = dataset.map(lambda examples: tokenizer(examples[\"text\"], return_tensors=\"np\"), batched=True)\r\n```\r\nfor it to work (the `return_tensors` part inside the `tokenizer` call).",
"Can I work on this?",
"Fixed in https://github.com/huggingface/datasets/pull/5095"
] | "2022-10-10T10:23:53Z" | "2022-10-10T17:51:15Z" | "2022-10-10T17:51:14Z" | MEMBER | null | null | null | ## Describe the bug
In the "Process text data" tutorial, [`map` has `return_tensors` as kwarg](https://huggingface.co/docs/datasets/main/en/nlp_process#map). It does not seem to appear in the [function documentation](https://huggingface.co/docs/datasets/main/en/package_reference/main_classes#datasets.Dataset.map), nor to work.
## Steps to reproduce the bug
MWE:
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
from datasets import load_dataset
dataset = load_dataset("lhoestq/demo1", split="train")
dataset = dataset.map(lambda examples: tokenizer(examples["review"]), batched=True, return_tensors="pt")
```
## Expected results
return_tensors to be a valid kwarg :smiley:
## Actual results
```python
>> TypeError: map() got an unexpected keyword argument 'return_tensors'
```
## Environment info
- `datasets` version: 2.3.2
- Platform: Linux-5.14.0-1052-oem-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 8.0.0
- Pandas version: 1.4.3
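For reference, the fix suggested in the comments moves `return_tensors` inside the tokenizer call (shown here against the MWE above):
```python
# return_tensors is a tokenizer kwarg, not a Dataset.map kwarg
dataset = dataset.map(
    lambda examples: tokenizer(examples["review"], return_tensors="np"),
    batched=True,
)
```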
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5093/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5093/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/166 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/166/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/166/comments | https://api.github.com/repos/huggingface/datasets/issues/166/events | https://github.com/huggingface/datasets/issues/166 | 620,850,218 | MDU6SXNzdWU2MjA4NTAyMTg= | 166 | Add a method to shuffle a dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/thomwolf",
"id": 7353373,
"login": "thomwolf",
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"type": "User",
"url": "https://api.github.com/users/thomwolf"
} | [
{
"color": "c5def5",
"default": false,
"description": "Generic discussion on the library",
"id": 2067400324,
"name": "generic discussion",
"node_id": "MDU6TGFiZWwyMDY3NDAwMzI0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion"
}
] | closed | false | null | [] | null | [
"+1 for the naming convention\r\n\r\nAbout the `shuffle` method, from my understanding it should be done in `Dataloader` (better separation between dataset processing - usage)",
"+1 for shuffle in `Dataloader`. \r\nSome `Dataloader` just store idxs of dataset and just shuffle those idxs, which might(?) be faster than do shuffle in dataset, especially when doing shuffle every epoch.\r\n\r\nAlso +1 for the naming convention.",
"As you might already know the issue of dataset shuffling came up in the nlp code [walkthrough](https://youtu.be/G3pOvrKkFuk?t=3204) by Yannic Kilcher\r\n",
"We added the `.shuffle` method :)\r\n\r\nClosing this one."
] | "2020-05-19T10:08:46Z" | "2020-06-23T15:07:33Z" | "2020-06-23T15:07:32Z" | MEMBER | null | null | null | Could maybe be a `dataset.shuffle(generator=None, seed=None)` signature method.
Also, we could maybe have a clear indication of which methods modify in-place and which methods return/cache a modified dataset. I kind of like the torch convention of an underscore suffix for all methods that modify a dataset in-place. What do you think? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/166/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/166/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5350 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5350/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5350/comments | https://api.github.com/repos/huggingface/datasets/issues/5350/events | https://github.com/huggingface/datasets/pull/5350 | 1,487,559,904 | PR_kwDODunzps5E8y2E | 5,350 | Clean up Loading methods docstrings | {
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stevhliu",
"id": 59462357,
"login": "stevhliu",
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stevhliu"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | "2022-12-09T22:25:30Z" | "2022-12-12T17:27:20Z" | "2022-12-12T17:24:01Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5350.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5350",
"merged_at": "2022-12-12T17:24:01Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5350.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5350"
} | Clean up for the docstrings in Loading methods! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5350/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5350/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/455 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/455/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/455/comments | https://api.github.com/repos/huggingface/datasets/issues/455/events | https://github.com/huggingface/datasets/pull/455 | 668,037,965 | MDExOlB1bGxSZXF1ZXN0NDU4NTk4NTUw | 455 | Add bleurt | {
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yjernite",
"id": 10469459,
"login": "yjernite",
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"repos_url": "https://api.github.com/users/yjernite/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yjernite"
} | [] | closed | false | null | [] | null | [
"Sorry one nit: Could we use named arguments for the call to BLEURT?\r\n\r\ni.e. \r\n scores = self.scorer.score(references=references, candidates=predictions)\r\n\r\n(i.e. so it is less bug prone)\r\n",
"Following up on Ankur's comment---we are going to drop support for\npositional (not named) arguments in the future releases because it seems to\ncause bugs and confusion. I hope it doesn't create too much of a mess.\n\nLe jeu. 30 juil. 2020 à 10:44, ankparikh <[email protected]> a\nécrit :\n\n> Sorry one nit: Could we use named arguments for the call to BLEURT?\n>\n> i.e.\n> scores = self.scorer.score(references=references, candidates=predictions)\n>\n> (i.e. so it is less bug prone)\n>\n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/nlp/pull/455#issuecomment-666414514>, or\n> unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ABTMRNGAN2PMECS5K4DIHJDR6GBMLANCNFSM4PL323FA>\n> .\n>\n",
"> Following up on Ankur's comment---we are going to drop support for positional (not named) arguments in the future releases because it seems to cause bugs and confusion. I hope it doesn't create too much of a mess. Le jeu. 30 juil. 2020 à 10:44, ankparikh <[email protected]> a écrit :\r\n> […](#)\r\n> Sorry one nit: Could we use named arguments for the call to BLEURT? i.e. scores = self.scorer.score(references=references, candidates=predictions) (i.e. so it is less bug prone) — You are receiving this because you were mentioned. Reply to this email directly, view it on GitHub <[#455 (comment)](https://github.com/huggingface/nlp/pull/455#issuecomment-666414514)>, or unsubscribe <https://github.com/notifications/unsubscribe-auth/ABTMRNGAN2PMECS5K4DIHJDR6GBMLANCNFSM4PL323FA> .\r\n\r\nChanged @ankparikh @tsellam, thanks for taking a look!",
"We should avoid positional arguments in metrics on our side as well. It's a dangerous source of errors indeed."
] | "2020-07-29T18:08:32Z" | "2020-07-31T13:56:14Z" | "2020-07-31T13:56:14Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/455.diff",
"html_url": "https://github.com/huggingface/datasets/pull/455",
"merged_at": "2020-07-31T13:56:14Z",
"patch_url": "https://github.com/huggingface/datasets/pull/455.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/455"
} | This PR adds the BLEURT metric to the library.
The BLEURT `Metric` downloads a TF checkpoint corresponding to its `config_name` at creation (in the `_info` function). Default is set to `bleurt-base-128`.
Note that the default in the original package is `bleurt-tiny-128`, but they throw a warning and recommend using `bleurt-base-128` instead. I think it's safer to have our users get a functioning metric when they call the default behavior; we'll address any discrepancies in the issues/discussions if they come up.
In addition to the BLEURT file, `load.py` was changed so we can ask users to pip install the required packages from git when they have a `setup.py` but are not on PyPI.
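A minimal usage sketch, assuming the metric-loading API of the library at the time and using named arguments as requested in the comments:
```python
from datasets import load_metric

# config_name selects the TF checkpoint to download; default is "bleurt-base-128"
bleurt = load_metric("bleurt", config_name="bleurt-base-128")
scores = bleurt.compute(predictions=["hello there"], references=["hi there"])
```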
cc @ankparikh @tsellam | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/455/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/455/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3287 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3287/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3287/comments | https://api.github.com/repos/huggingface/datasets/issues/3287/events | https://github.com/huggingface/datasets/pull/3287 | 1,056,079,724 | PR_kwDODunzps4upsWR | 3,287 | Add The Pile dataset and PubMed Central subset | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [] | "2021-11-17T12:35:58Z" | "2021-12-01T15:29:08Z" | "2021-12-01T15:29:07Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3287.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3287",
"merged_at": "2021-12-01T15:29:06Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3287.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3287"
} | Add:
- The complete final version of The Pile dataset: "all" config
- PubMed Central subset of The Pile: "pubmed_central" config
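For reference, a sketch of how these configs would be loaded once merged (the dataset is very large, so streaming is shown as an option):
```python
from datasets import load_dataset

pubmed = load_dataset("the_pile", "pubmed_central", split="train", streaming=True)
```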
Close #1675, close bigscience-workshop/data_tooling#74.
CC: @StellaAthena, @lewtun | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 5,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 5,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3287/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3287/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4155 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4155/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4155/comments | https://api.github.com/repos/huggingface/datasets/issues/4155/events | https://github.com/huggingface/datasets/pull/4155 | 1,202,183,608 | PR_kwDODunzps42Hqam | 4,155 | Make HANS dataset streamable | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | "2022-04-12T17:34:13Z" | "2022-04-13T12:03:46Z" | "2022-04-13T11:57:35Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4155.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4155",
"merged_at": "2022-04-13T11:57:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4155.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4155"
} | Fix #4133 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4155/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4155/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2838 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2838/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2838/comments | https://api.github.com/repos/huggingface/datasets/issues/2838/events | https://github.com/huggingface/datasets/pull/2838 | 980,067,186 | MDExOlB1bGxSZXF1ZXN0NzIwMzcxMDk5 | 2,838 | Add error_bad_chunk to the JSON loader | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | open | false | null | [] | null | [
"Somebody reported the following error message which I think this is related to the goal of this PR:\r\n```Python\r\n03/24/2022 02:19:45 - INFO - __main__ - Step 5637: {'lr': 0.00018773333333333333, 'samples': 360768, 'batch_offset': 5637, 'completed_steps': 704, 'loss/train': 4.473083972930908, 'tokens/s': 6692.6176452714235}\r\n03/24/2022 02:19:46 - WARNING - datasets.utils.streaming_download_manager - Got disconnected from remote data host. Retrying in 5sec [1/20]\r\n03/24/2022 02:20:01 - WARNING - datasets.utils.streaming_download_manager - Got disconnected from remote data host. Retrying in 5sec [2/20]\r\n03/24/2022 02:20:09 - ERROR - datasets.packaged_modules.json.json - Failed to read file 'gzip://file-000000000007.json::https://huggingface.co/datasets/lvwerra/codeparrot-clean-train/resolve/1d740acb9d09cf7a3307553323e2c677a6535407/file-000000000007.json.gz' with error <class 'pyarrow.lib.ArrowInvalid'>: JSON parse error: Invalid value. in row 0\r\n03/24/2022 02:20:24 - WARNING - datasets.utils.streaming_download_manager - Got disconnected from remote data host. Retrying in 5sec [1/20]\r\n03/24/2022 02:20:37 - WARNING - datasets.utils.streaming_download_manager - Got disconnected from remote data host. Retrying in 5sec [2/20]\r\n03/24/2022 02:20:44 - WARNING - datasets.utils.streaming_download_manager - Got disconnected from remote data host. Retrying in 5sec [3/20]\r\n03/24/2022 02:20:49 - WARNING - datasets.utils.streaming_download_manager - Got disconnected from remote data host. Retrying in 5sec [4/20]\r\n03/24/2022 02:20:54 - WARNING - datasets.utils.streaming_download_manager - Got disconnected from remote data host. Retrying in 5sec [5/20]\r\n03/24/2022 02:20:59 - WARNING - datasets.utils.streaming_download_manager - Got disconnected from remote data host. Retrying in 5sec [6/20]\r\n03/24/2022 02:21:12 - WARNING - datasets.utils.streaming_download_manager - Got disconnected from remote data host. Retrying in 5sec [7/20]\r\n03/24/2022 02:21:20 - WARNING - datasets.utils.streaming_download_manager - Got disconnected from remote data host. Retrying in 5sec [8/20]\r\n03/24/2022 02:21:25 - WARNING - datasets.utils.streaming_download_manager - Got disconnected from remote data host. Retrying in 5sec [9/20]\r\n03/24/2022 02:21:30 - WARNING - datasets.utils.streaming_download_manager - Got disconnected from remote data host. Retrying in 5sec [10/20]\r\n03/24/2022 02:21:36 - WARNING - datasets.utils.streaming_download_manager - Got disconnected from remote data host. Retrying in 5sec [11/20]\r\n03/24/2022 02:21:41 - WARNING - datasets.utils.streaming_download_manager - Got disconnected from remote data host. Retrying in 5sec [12/20]\r\n03/24/2022 02:21:46 - WARNING - datasets.utils.streaming_download_manager - Got disconnected from remote data host. Retrying in 5sec [13/20]\r\n03/24/2022 02:21:51 - WARNING - datasets.utils.streaming_download_manager - Got disconnected from remote data host. Retrying in 5sec [14/20]\r\n03/24/2022 02:21:56 - WARNING - datasets.utils.streaming_download_manager - Got disconnected from remote data host. Retrying in 5sec [15/20]\r\n03/24/2022 02:22:01 - WARNING - datasets.utils.streaming_download_manager - Got disconnected from remote data host. Retrying in 5sec [16/20]\r\n03/24/2022 02:22:12 - WARNING - datasets.utils.streaming_download_manager - Got disconnected from remote data host. Retrying in 5sec [17/20]\r\n03/24/2022 02:22:21 - WARNING - datasets.utils.streaming_download_manager - Got disconnected from remote data host. 
Retrying in 5sec [18/20]\r\nTraceback (most recent call last):\r\n File \"/opt/conda/lib/python3.7/site-packages/datasets/packaged_modules/json/json.py\", line 119, in _generate_tables\r\n io.BytesIO(batch), read_options=paj.ReadOptions(block_size=block_size)\r\n File \"pyarrow/_json.pyx\", line 246, in pyarrow._json.read_json\r\n File \"pyarrow/error.pxi\", line 143, in pyarrow.lib.pyarrow_internal_check_status\r\n File \"pyarrow/error.pxi\", line 99, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowInvalid: JSON parse error: Invalid value. in row 0\r\n```\r\nThis comes from the CodeParrot training script where streaming is used. When the connection fails it can happen that the JSON cannot be read anymore and then an error is thrown.\r\n\r\n",
"Yea if streaming makes a JSON unreadable then `error_bad_chunk` would help by skipping all the bad JSON data",
"Should we close this PR?",
"I didn't continue this PR but I think it's valuable (though now I think it would be better to have multiple options: raise, warn or ignore errors). I'll continue it at one point"
] | "2021-08-26T10:07:32Z" | "2023-09-25T09:06:42Z" | null | MEMBER | null | 1 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2838.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2838",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/2838.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2838"
} | Add the `error_bad_chunk` parameter to the JSON loader.
Setting `error_bad_chunk=False` allows to skip an unparsable chunk of JSON data without raising an error.
Additional note:
In case of an unparsable JSON chunk, the JSON loader no longer tries to load the full JSON (which could take a lot of time in streaming mode) to get the JSON fields that the user may have forgotten to pass. Ex : for squad-like data, the user has to pass `field="data"` to tell the loader to get the list of examples from this field.
TODO: update docs
cc @lvwerra | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 2,
"total_count": 4,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2838/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2838/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/12 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/12/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/12/comments | https://api.github.com/repos/huggingface/datasets/issues/12/events | https://github.com/huggingface/datasets/pull/12 | 604,518,583 | MDExOlB1bGxSZXF1ZXN0NDA3MDk3MzA4 | 12 | [Map Function] add assert statement if map function does not return dict or None | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
} | [] | closed | false | null | [] | null | [
"Also added to an assert statement that if a dict is returned by function, all values of `dicts` are `lists`",
"Wait to merge until `make style` is set in place.",
"Updated the assert statements. Played around with multiple cases and it should be good now IMO. "
] | "2020-04-22T07:21:24Z" | "2022-10-04T09:31:53Z" | "2020-04-24T06:29:03Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/12.diff",
"html_url": "https://github.com/huggingface/datasets/pull/12",
"merged_at": "2020-04-24T06:29:03Z",
"patch_url": "https://github.com/huggingface/datasets/pull/12.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/12"
IMO, if the provided function is neither a side-effect-only function (e.g. a print statement, which returns a variable of type `None`) nor a function that updates the dataset (which returns a variable of type `dict`), then a `TypeError` should be raised.
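For illustration, the kinds of calls this assert would distinguish (hypothetical `dataset` with a `"text"` column):
```python
dataset.map(lambda ex: print(ex["text"]))         # returns None -> allowed (side effect only)
dataset.map(lambda ex: {"len": len(ex["text"])})  # returns dict -> allowed (updates dataset)
dataset.map(lambda ex: ex["text"])                # returns str  -> should raise a TypeError
```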
Not sure whether you had cases in mind where the user should do something else @thomwolf, but I think a lot of silent errors can be avoided with this assert statement. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/12/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/12/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1293 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1293/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1293/comments | https://api.github.com/repos/huggingface/datasets/issues/1293/events | https://github.com/huggingface/datasets/pull/1293 | 759,360,113 | MDExOlB1bGxSZXF1ZXN0NTM0Mzc4OTQ0 | 1,293 | add hrenwac_para | {
"avatar_url": "https://avatars.githubusercontent.com/u/51969305?v=4",
"events_url": "https://api.github.com/users/ivan-zidov/events{/privacy}",
"followers_url": "https://api.github.com/users/ivan-zidov/followers",
"following_url": "https://api.github.com/users/ivan-zidov/following{/other_user}",
"gists_url": "https://api.github.com/users/ivan-zidov/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ivan-zidov",
"id": 51969305,
"login": "ivan-zidov",
"node_id": "MDQ6VXNlcjUxOTY5MzA1",
"organizations_url": "https://api.github.com/users/ivan-zidov/orgs",
"received_events_url": "https://api.github.com/users/ivan-zidov/received_events",
"repos_url": "https://api.github.com/users/ivan-zidov/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ivan-zidov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ivan-zidov/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ivan-zidov"
} | [] | closed | false | null | [] | null | [] | "2020-12-08T11:16:41Z" | "2020-12-08T11:34:47Z" | "2020-12-08T11:34:38Z" | NONE | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1293.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1293",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1293.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1293"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1293/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1293/timeline | null | null | true |
|
https://api.github.com/repos/huggingface/datasets/issues/5304 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5304/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5304/comments | https://api.github.com/repos/huggingface/datasets/issues/5304/events | https://github.com/huggingface/datasets/issues/5304 | 1,465,110,367 | I_kwDODunzps5XU89f | 5,304 | timit_asr doesn't load the test split. | {
"avatar_url": "https://avatars.githubusercontent.com/u/17842800?v=4",
"events_url": "https://api.github.com/users/seyong92/events{/privacy}",
"followers_url": "https://api.github.com/users/seyong92/followers",
"following_url": "https://api.github.com/users/seyong92/following{/other_user}",
"gists_url": "https://api.github.com/users/seyong92/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/seyong92",
"id": 17842800,
"login": "seyong92",
"node_id": "MDQ6VXNlcjE3ODQyODAw",
"organizations_url": "https://api.github.com/users/seyong92/orgs",
"received_events_url": "https://api.github.com/users/seyong92/received_events",
"repos_url": "https://api.github.com/users/seyong92/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/seyong92/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/seyong92/subscriptions",
"type": "User",
"url": "https://api.github.com/users/seyong92"
} | [] | closed | false | null | [] | null | [
"The [timit_asr.py](https://huggingface.co/datasets/timit_asr/blob/main/timit_asr.py) script iterates over the WAV files per split directory using this:\r\n```python\r\nwav_paths = sorted(Path(data_dir).glob(f\"**/{split}/**/*.wav\"))\r\nwav_paths = wav_paths if wav_paths else sorted(Path(data_dir).glob(f\"**/{split.upper()}/**/*.WAV\"))\r\n```\r\n\r\nCan you check that there is a directory named \"test\" somewhere in your timit data directory ?"
] | "2022-11-26T10:18:22Z" | "2023-02-10T16:33:21Z" | "2023-02-10T16:33:21Z" | NONE | null | null | null | ### Describe the bug
When I use the function ```timit = load_dataset('timit_asr', data_dir=data_dir)```, it only loads the train split, not the test split.
I tried changing the directory and file names for the test split from lower case to upper case, but it does not work at all.
```python
DatasetDict({
train: Dataset({
features: ['file', 'audio', 'text', 'phonetic_detail', 'word_detail', 'dialect_region', 'sentence_type', 'speaker_id', 'id'],
num_rows: 4620
})
test: Dataset({
features: ['file', 'audio', 'text', 'phonetic_detail', 'word_detail', 'dialect_region', 'sentence_type', 'speaker_id', 'id'],
num_rows: 0
})
})
```
The directory structure of both splits are same. (DIALECT_REGION / SPEAKER_CODE / DATA_FILES)
### Steps to reproduce the bug
1. just use ```timit = load_dataset('timit_asr', data_dir=data_dir)```
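Based on the loading-script glob quoted in the comments, a quick sanity check is to run the same globs by hand (the path below is hypothetical):
```python
from pathlib import Path

data_dir = "/path/to/timit"  # hypothetical
print(sorted(Path(data_dir).glob("**/test/**/*.wav"))[:3])
print(sorted(Path(data_dir).glob("**/TEST/**/*.WAV"))[:3])
```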
### Expected behavior
```python
DatasetDict({
train: Dataset({
features: ['file', 'audio', 'text', 'phonetic_detail', 'word_detail', 'dialect_region', 'sentence_type', 'speaker_id', 'id'],
num_rows: 4620
})
test: Dataset({
features: ['file', 'audio', 'text', 'phonetic_detail', 'word_detail', 'dialect_region', 'sentence_type', 'speaker_id', 'id'],
num_rows: 1680
})
})
```
### Environment info
- ubuntu 20.04
- python 3.9.13
- datasets 2.7.1 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5304/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5304/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5021 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5021/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5021/comments | https://api.github.com/repos/huggingface/datasets/issues/5021/events | https://github.com/huggingface/datasets/issues/5021 | 1,385,351,250 | I_kwDODunzps5SkshS | 5,021 | Split is inferred from filename and overrides metadata.jsonl | {
"avatar_url": "https://avatars.githubusercontent.com/u/102226344?v=4",
"events_url": "https://api.github.com/users/float-trip/events{/privacy}",
"followers_url": "https://api.github.com/users/float-trip/followers",
"following_url": "https://api.github.com/users/float-trip/following{/other_user}",
"gists_url": "https://api.github.com/users/float-trip/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/float-trip",
"id": 102226344,
"login": "float-trip",
"node_id": "U_kgDOBhfZqA",
"organizations_url": "https://api.github.com/users/float-trip/orgs",
"received_events_url": "https://api.github.com/users/float-trip/received_events",
"repos_url": "https://api.github.com/users/float-trip/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/float-trip/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/float-trip/subscriptions",
"type": "User",
"url": "https://api.github.com/users/float-trip"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
},
{
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists",
"id": 1935892865,
"name": "duplicate",
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate"
}
] | closed | false | null | [] | null | [
"Hi! What's the structure of your image folder? `datasets` by default tries to infer to what split each file belongs based on directory/file names. If it's OK to load all the images inside the `dataset` folder in the `train` split, you can do the following:\r\n```python\r\ndataset = load_dataset(\"imagefolder\", data_files=\"dataset/**\")\r\n```",
"Thanks! Specifying `data_files` worked for that case.\r\n\r\nI'm new to the library, so let me try rephrasing the issue. If there's no actual bug here, sorry for the trouble.\r\n\r\nI've uploaded an example [here](https://files.catbox.moe/nfj2pd.zip) with the following files: \r\n\r\n```\r\n.\r\n├── bug.py\r\n└── imagefolder\r\n ├── test\r\n │ ├── metadata.jsonl\r\n │ ├── dog.jpg\r\n │ └── personal trainer.jpg\r\n └── train\r\n ├── metadata.jsonl\r\n ├── cat.jpg\r\n └── testing center.jpg\r\n```\r\n\r\n`bug.py`\r\n```\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"imagefolder\")\r\n\r\nprint(dataset)\r\n# DatasetDict({\r\n# test: Dataset({\r\n# features: ['image', 'text'],\r\n# num_rows: 1\r\n# })\r\n# })\r\n\r\nfor split in dataset:\r\n print(\"Split:\", split)\r\n for n in dataset[split]:\r\n print(n['text'])\r\n\r\n\r\n# Split: test\r\n# testing center\r\n```\r\n\r\nAs far as I can tell, this conforms with the example given here: https://huggingface.co/docs/datasets/image_dataset#imagefolder. It appears to me that, even though `metadata.jsonl` is present, the inferred labels from the path are taking precedent. Does this sound like a bug/undocumented behavior?",
"This looks like a duplicate of https://github.com/huggingface/datasets/issues/4895 (the problem is explained in this comment: https://github.com/huggingface/datasets/issues/4895#issuecomment-1248269550).\r\n\r\nIn the meantime, you can do the following to fetch all the splits:\r\n```python\r\ndataset = load_dataset(\"imagefolder\", data_files={\"train\": \"imagefolder/train/**\", \"test\": \"imagefolder/test/**\"})\r\n```\r\n"
] | "2022-09-26T03:22:14Z" | "2022-09-29T08:07:50Z" | "2022-09-29T08:07:50Z" | NONE | null | null | null | ## Describe the bug
Including the strings "test" or "train" anywhere in a filename causes `datasets` to infer the split and silently ignore all other files.
This behavior is documented for directory names but not filenames: https://huggingface.co/docs/datasets/image_dataset#imagefolder
## Steps to reproduce the bug
`metadata.jsonl`
```json
{"file_name": "photo of a cat.jpg", "text": "a photo of a cat"}
{"file_name": "photo of a dog.jpg", "text": "a photo of a dog"}
{"file_name": "photo of a train.jpg", "text": "a photo of a train"}
{"file_name": "photo of test tubes.jpg", "text": "a photo of test tubes"}
```
`bug.py`
```python
from datasets import load_dataset
dataset = load_dataset("dataset")
print(dataset)
# DatasetDict({
# train: Dataset({
# features: ['image', 'text'],
# num_rows: 1
# })
# test: Dataset({
# features: ['image', 'text'],
# num_rows: 1
# })
# })
for split in dataset:
for n in dataset[split]:
print(n['text'])
# a photo of a train
# a photo of test tubes
```
## Expected results
One single dataset with all four images / a warning for unused files / documentation of this behavior
## Actual results
Only the images with "test" or "train" in the name are loaded
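The workaround from the comments (also discussed in the duplicate issue #4895) is to disable split inference by passing `data_files` explicitly:
```python
dataset = load_dataset("imagefolder", data_files={"train": "dataset/**"})
```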
## Environment info
- `datasets` version: 2.5.1
- Platform: macOS-12.5.1-x86_64-i386-64bit
- Python version: 3.10.4
- PyArrow version: 9.0.0
- Pandas version: 1.5.0 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5021/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5021/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2788 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2788/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2788/comments | https://api.github.com/repos/huggingface/datasets/issues/2788/events | https://github.com/huggingface/datasets/issues/2788 | 967,149,389 | MDU6SXNzdWU5NjcxNDkzODk= | 2,788 | How to sample every file in a list of files making up a split in a dataset when loading? | {
"avatar_url": "https://avatars.githubusercontent.com/u/11220949?v=4",
"events_url": "https://api.github.com/users/brijow/events{/privacy}",
"followers_url": "https://api.github.com/users/brijow/followers",
"following_url": "https://api.github.com/users/brijow/following{/other_user}",
"gists_url": "https://api.github.com/users/brijow/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/brijow",
"id": 11220949,
"login": "brijow",
"node_id": "MDQ6VXNlcjExMjIwOTQ5",
"organizations_url": "https://api.github.com/users/brijow/orgs",
"received_events_url": "https://api.github.com/users/brijow/received_events",
"repos_url": "https://api.github.com/users/brijow/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/brijow/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/brijow/subscriptions",
"type": "User",
"url": "https://api.github.com/users/brijow"
} | [] | closed | false | null | [] | null | [
"Hi ! This is not possible just with `load_dataset`.\r\n\r\nYou can do something like this instead:\r\n```python\r\nseed=42\r\ndata_files_dict = {\r\n \"train\": [train_file1, train_file2],\r\n \"test\": [test_file1, test_file2],\r\n \"val\": [val_file1, val_file2]\r\n}\r\ndataset = datasets.load_dataset(\r\n \"csv\",\r\n data_files=data_files_dict,\r\n).shuffle(seed=seed)\r\n\r\nsample_dataset = {splitname: split.select(range(8)) for splitname, split in dataset.items()}\r\n```\r\n\r\nAnother alternative is loading each file separately with `split=\"train[:8]\"` and then use `concatenate_datasets` to merge the sample of each file."
] | "2021-08-11T17:43:21Z" | "2023-07-25T17:40:50Z" | "2023-07-25T17:40:50Z" | NONE | null | null | null | I am loading a dataset with multiple train, test, and validation files like this:
```
data_files_dict = {
"train": [train_file1, train_file2],
"test": [test_file1, test_file2],
"val": [val_file1, val_file2]
}
dataset = datasets.load_dataset(
"csv",
data_files=data_files_dict,
split=['train[:8]', 'test[:8]', 'val[:8]']
)
```
However, this only selects the first 8 rows of train_file1, test_file1, and val_file1, since those are the first files in each list.
I'm trying to formulate a split argument that can sample from each of the files that make up a split; a sketch of the per-file workaround I have in mind is below.
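Something like the following is what I mean — a rough sketch reusing `data_files_dict` from above (the `"train[:8]"` slice refers to the default split name a single CSV file gets, not to one of my split names):
```python
import datasets

# A rough sketch (not built-in behavior): slice each file on its own,
# then merge the per-file slices with concatenate_datasets.
sample_dataset = {}
for split_name, files in data_files_dict.items():
    per_file_samples = [
        datasets.load_dataset("csv", data_files=f, split="train[:8]")
        for f in files
    ]
    sample_dataset[split_name] = datasets.concatenate_datasets(per_file_samples)
```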
Is this type of splitting supported? If so, how can I do it? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2788/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2788/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/419 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/419/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/419/comments | https://api.github.com/repos/huggingface/datasets/issues/419/events | https://github.com/huggingface/datasets/pull/419 | 661,974,747 | MDExOlB1bGxSZXF1ZXN0NDUzNTgxNzQz | 419 | EmoContext dataset add | {
"avatar_url": "https://avatars.githubusercontent.com/u/35500534?v=4",
"events_url": "https://api.github.com/users/lordtt13/events{/privacy}",
"followers_url": "https://api.github.com/users/lordtt13/followers",
"following_url": "https://api.github.com/users/lordtt13/following{/other_user}",
"gists_url": "https://api.github.com/users/lordtt13/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lordtt13",
"id": 35500534,
"login": "lordtt13",
"node_id": "MDQ6VXNlcjM1NTAwNTM0",
"organizations_url": "https://api.github.com/users/lordtt13/orgs",
"received_events_url": "https://api.github.com/users/lordtt13/received_events",
"repos_url": "https://api.github.com/users/lordtt13/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lordtt13/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lordtt13/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lordtt13"
} | [] | closed | false | null | [] | null | [] | "2020-07-20T15:48:45Z" | "2020-07-24T08:22:01Z" | "2020-07-24T08:22:00Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/419.diff",
"html_url": "https://github.com/huggingface/datasets/pull/419",
"merged_at": "2020-07-24T08:22:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/419.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/419"
} | EmoContext Dataset add
Signed-off-by: lordtt13 <[email protected]> | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/419/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/419/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/467 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/467/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/467/comments | https://api.github.com/repos/huggingface/datasets/issues/467/events | https://github.com/huggingface/datasets/pull/467 | 671,580,010 | MDExOlB1bGxSZXF1ZXN0NDYxNzgwMzUy | 467 | DOCS: Fix typo | {
"avatar_url": "https://avatars.githubusercontent.com/u/13381361?v=4",
"events_url": "https://api.github.com/users/bharatr21/events{/privacy}",
"followers_url": "https://api.github.com/users/bharatr21/followers",
"following_url": "https://api.github.com/users/bharatr21/following{/other_user}",
"gists_url": "https://api.github.com/users/bharatr21/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/bharatr21",
"id": 13381361,
"login": "bharatr21",
"node_id": "MDQ6VXNlcjEzMzgxMzYx",
"organizations_url": "https://api.github.com/users/bharatr21/orgs",
"received_events_url": "https://api.github.com/users/bharatr21/received_events",
"repos_url": "https://api.github.com/users/bharatr21/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/bharatr21/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bharatr21/subscriptions",
"type": "User",
"url": "https://api.github.com/users/bharatr21"
} | [] | closed | false | null | [] | null | [
"Thanks!"
] | "2020-08-02T08:59:37Z" | "2020-08-02T13:52:27Z" | "2020-08-02T09:18:54Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/467.diff",
"html_url": "https://github.com/huggingface/datasets/pull/467",
"merged_at": "2020-08-02T09:18:54Z",
"patch_url": "https://github.com/huggingface/datasets/pull/467.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/467"
} | Fix typo from dictionnary -> dictionary | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/467/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/467/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2817 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2817/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2817/comments | https://api.github.com/repos/huggingface/datasets/issues/2817/events | https://github.com/huggingface/datasets/pull/2817 | 974,486,051 | MDExOlB1bGxSZXF1ZXN0NzE1NzgzMDQ3 | 2,817 | Rename The Pile subsets | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"Sounds good. Should we also have a “the_pile” dataset with the subsets as configuration?",
"I think the main `the_pile` datasets will be the one that is the mix of all the subsets: https://the-eye.eu/public/AI/pile/\r\n\r\nWe can also add configurations for each subset, and even allow users to specify the subsets they want:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nload_dataset(\"the_pile\", subsets=[\"openwebtext2\", \"books3\", \"hn\"])\r\n```\r\n\r\nWe're alrady doing something similar for mC4, where users can specify the list of languages they want to load."
] | "2021-08-19T09:56:22Z" | "2021-08-23T16:24:10Z" | "2021-08-23T16:24:09Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2817.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2817",
"merged_at": "2021-08-23T16:24:09Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2817.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2817"
} | After discussing with @yjernite, we think it's better for the subsets of The Pile to explicitly include "the_pile" in their names.
I'm doing the changes for the subsets that @richarddwang added:
- [x] books3 -> the_pile_books3 https://github.com/huggingface/datasets/pull/2801
- [x] stack_exchange -> the_pile_stack_exchange https://github.com/huggingface/datasets/pull/2803
- [x] openwebtext2 -> the_pile_openwebtext2 https://github.com/huggingface/datasets/pull/2802
For consistency we should also rename `bookcorpusopen` to `the_pile_bookcorpus` IMO, but let me know what you think.
(we can just add a deprecation message to `bookcorpusopen` for now and add `the_pile_bookcorpus`) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2817/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2817/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6026 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6026/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6026/comments | https://api.github.com/repos/huggingface/datasets/issues/6026/events | https://github.com/huggingface/datasets/pull/6026 | 1,802,929,222 | PR_kwDODunzps5VanI8 | 6,026 | Fix style with ruff 0.0.278 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6026). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006444 / 0.011353 (-0.004909) | 0.003768 / 0.011008 (-0.007240) | 0.079625 / 0.038508 (0.041117) | 0.064490 / 0.023109 (0.041381) | 0.313858 / 0.275898 (0.037960) | 0.350810 / 0.323480 (0.027330) | 0.004804 / 0.007986 (-0.003182) | 0.002904 / 0.004328 (-0.001425) | 0.061728 / 0.004250 (0.057477) | 0.052265 / 0.037052 (0.015213) | 0.321246 / 0.258489 (0.062757) | 0.353873 / 0.293841 (0.060032) | 0.027510 / 0.128546 (-0.101036) | 0.007942 / 0.075646 (-0.067704) | 0.260518 / 0.419271 (-0.158754) | 0.045686 / 0.043533 (0.002153) | 0.316821 / 0.255139 (0.061682) | 0.337086 / 0.283200 (0.053886) | 0.022188 / 0.141683 (-0.119495) | 1.427345 / 1.452155 (-0.024810) | 1.476059 / 1.492716 (-0.016657) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.189640 / 0.018006 (0.171634) | 0.429724 / 0.000490 (0.429235) | 0.005314 / 0.000200 (0.005114) | 0.000076 / 0.000054 (0.000021) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024412 / 0.037411 (-0.013000) | 0.073488 / 0.014526 (0.058962) | 0.083843 / 0.176557 (-0.092714) | 0.147849 / 0.737135 (-0.589286) | 0.085465 / 0.296338 (-0.210873) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.405314 / 0.215209 (0.190105) | 4.071471 / 2.077655 (1.993816) | 1.916252 / 1.504120 (0.412132) | 1.721616 / 1.541195 (0.180422) | 1.807187 / 1.468490 
(0.338697) | 0.498045 / 4.584777 (-4.086732) | 3.057526 / 3.745712 (-0.688187) | 4.451424 / 5.269862 (-0.818437) | 2.764020 / 4.565676 (-1.801656) | 0.057665 / 0.424275 (-0.366610) | 0.006679 / 0.007607 (-0.000928) | 0.485733 / 0.226044 (0.259688) | 4.844367 / 2.268929 (2.575438) | 2.435359 / 55.444624 (-53.009265) | 2.111478 / 6.876477 (-4.764999) | 2.377448 / 2.142072 (0.235375) | 0.587997 / 4.805227 (-4.217230) | 0.125545 / 6.500664 (-6.375120) | 0.061509 / 0.075469 (-0.013960) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.229210 / 1.841788 (-0.612577) | 18.553994 / 8.074308 (10.479686) | 14.037877 / 10.191392 (3.846485) | 0.144230 / 0.680424 (-0.536194) | 0.016891 / 0.534201 (-0.517310) | 0.329039 / 0.579283 (-0.250244) | 0.357269 / 0.434364 (-0.077095) | 0.384222 / 0.540337 (-0.156115) | 0.521292 / 1.386936 (-0.865644) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006359 / 0.011353 (-0.004994) | 0.003721 / 0.011008 (-0.007287) | 0.062047 / 0.038508 (0.023539) | 0.065267 / 0.023109 (0.042158) | 0.360164 / 0.275898 (0.084266) | 0.402292 / 0.323480 (0.078812) | 0.005603 / 0.007986 (-0.002382) | 0.002966 / 0.004328 (-0.001363) | 0.062580 / 0.004250 (0.058330) | 0.053634 / 0.037052 (0.016582) | 0.362210 / 0.258489 (0.103721) | 0.404285 / 0.293841 (0.110444) | 0.027567 / 0.128546 (-0.100979) | 0.008119 / 0.075646 (-0.067528) | 0.067577 / 0.419271 (-0.351694) | 0.042867 / 0.043533 (-0.000666) | 0.361576 / 0.255139 (0.106437) | 0.389061 / 0.283200 (0.105862) | 0.021923 / 0.141683 (-0.119760) | 1.446259 / 1.452155 (-0.005895) | 1.490724 / 1.492716 (-0.001992) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.206433 / 0.018006 (0.188427) | 0.424178 / 0.000490 (0.423688) | 0.002340 / 0.000200 (0.002140) | 0.000069 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024955 / 0.037411 (-0.012456) | 0.077446 / 0.014526 (0.062920) | 0.088540 / 0.176557 (-0.088017) | 0.141225 / 0.737135 (-0.595910) | 0.089747 / 0.296338 (-0.206592) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.443738 / 0.215209 (0.228529) | 4.208887 / 2.077655 (2.131233) | 2.155127 / 1.504120 (0.651007) | 2.028178 / 1.541195 (0.486983) | 2.084903 / 1.468490 (0.616413) | 0.497530 / 4.584777 (-4.087247) | 3.069012 / 3.745712 (-0.676700) | 3.025184 / 5.269862 (-2.244678) | 1.904687 / 4.565676 (-2.660990) | 0.057526 / 0.424275 (-0.366749) | 0.006482 / 0.007607 (-0.001125) | 0.494692 / 0.226044 (0.268647) | 4.944437 / 2.268929 (2.675508) | 2.655989 / 55.444624 (-52.788635) | 2.331677 / 6.876477 (-4.544800) | 2.382396 / 2.142072 (0.240324) | 0.582019 / 4.805227 (-4.223209) | 0.125866 / 6.500664 (-6.374799) | 0.062908 / 0.075469 (-0.012561) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.294612 / 1.841788 (-0.547176) | 19.016152 / 8.074308 (10.941844) | 14.088828 / 10.191392 (3.897436) | 0.160842 / 0.680424 (-0.519582) | 0.017054 / 0.534201 (-0.517146) | 0.333647 / 0.579283 (-0.245636) | 0.348094 / 0.434364 (-0.086270) | 0.394970 / 0.540337 (-0.145367) | 0.551141 / 1.386936 (-0.835795) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#9e9cfe886792b30b5000808072a0f91ec8536749 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007442 / 0.011353 (-0.003911) | 0.004302 / 0.011008 (-0.006707) | 0.087159 / 0.038508 (0.048651) | 0.095094 / 0.023109 (0.071985) | 0.315422 / 0.275898 (0.039524) | 0.346672 / 0.323480 (0.023192) | 0.005811 / 0.007986 (-0.002174) | 0.003597 / 0.004328 (-0.000731) | 0.066400 / 0.004250 (0.062150) | 0.065947 / 0.037052 (0.028894) | 0.323269 / 0.258489 (0.064780) | 0.353309 / 0.293841 (0.059468) | 0.032268 / 0.128546 (-0.096278) | 0.008696 / 0.075646 (-0.066950) | 0.291486 / 0.419271 (-0.127786) | 0.054609 / 0.043533 (0.011076) | 0.321061 / 0.255139 (0.065922) | 0.336907 / 0.283200 (0.053707) | 0.027338 / 0.141683 (-0.114345) | 1.496442 / 1.452155 (0.044287) | 1.576946 / 1.492716 (0.084229) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.229140 / 0.018006 (0.211134) | 0.487500 / 0.000490 (0.487010) | 0.002425 / 0.000200 (0.002225) | 0.000089 / 0.000054 (0.000034) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029351 / 0.037411 (-0.008060) | 0.089610 / 0.014526 (0.075084) | 0.097880 / 0.176557 (-0.078676) | 0.155947 / 0.737135 (-0.581189) | 0.098593 / 0.296338 (-0.197745) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.382911 / 0.215209 (0.167702) | 3.820363 / 2.077655 (1.742708) | 1.866385 / 1.504120 (0.362265) | 1.712910 / 1.541195 (0.171716) | 1.813863 / 1.468490 
(0.345373) | 0.484884 / 4.584777 (-4.099893) | 3.678911 / 3.745712 (-0.066801) | 5.249908 / 5.269862 (-0.019953) | 3.099614 / 4.565676 (-1.466063) | 0.057449 / 0.424275 (-0.366826) | 0.007728 / 0.007607 (0.000120) | 0.462123 / 0.226044 (0.236078) | 4.603942 / 2.268929 (2.335014) | 2.380957 / 55.444624 (-53.063668) | 2.059621 / 6.876477 (-4.816856) | 2.293764 / 2.142072 (0.151691) | 0.636471 / 4.805227 (-4.168756) | 0.150112 / 6.500664 (-6.350552) | 0.063705 / 0.075469 (-0.011764) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.358099 / 1.841788 (-0.483689) | 20.193750 / 8.074308 (12.119442) | 14.297350 / 10.191392 (4.105958) | 0.164477 / 0.680424 (-0.515947) | 0.018259 / 0.534201 (-0.515942) | 0.399010 / 0.579283 (-0.180273) | 0.417306 / 0.434364 (-0.017058) | 0.456961 / 0.540337 (-0.083377) | 0.631068 / 1.386936 (-0.755868) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007324 / 0.011353 (-0.004028) | 0.004463 / 0.011008 (-0.006545) | 0.066148 / 0.038508 (0.027640) | 0.093909 / 0.023109 (0.070799) | 0.399122 / 0.275898 (0.123224) | 0.430226 / 0.323480 (0.106746) | 0.005505 / 0.007986 (-0.002481) | 0.003579 / 0.004328 (-0.000749) | 0.066529 / 0.004250 (0.062278) | 0.063471 / 0.037052 (0.026418) | 0.406351 / 0.258489 (0.147862) | 0.439987 / 0.293841 (0.146146) | 0.032640 / 0.128546 (-0.095906) | 0.008770 / 0.075646 (-0.066877) | 0.072592 / 0.419271 (-0.346680) | 0.050429 / 0.043533 (0.006896) | 0.390873 / 0.255139 (0.135734) | 0.412438 / 0.283200 (0.129239) | 0.027113 / 0.141683 (-0.114570) | 1.458281 / 1.452155 (0.006126) | 1.536819 / 1.492716 (0.044103) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.228309 / 0.018006 (0.210303) | 0.454042 / 0.000490 (0.453552) | 0.000387 / 0.000200 (0.000187) | 0.000055 / 0.000054 (0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029573 / 0.037411 (-0.007838) | 0.086433 / 0.014526 (0.071907) | 0.097992 / 0.176557 (-0.078565) | 0.152464 / 0.737135 (-0.584671) | 0.099901 / 0.296338 (-0.196437) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.413807 / 0.215209 (0.198598) | 4.126395 / 2.077655 (2.048740) | 2.113544 / 1.504120 (0.609424) | 1.967829 / 1.541195 (0.426635) | 2.037123 / 1.468490 (0.568633) | 0.489403 / 4.584777 (-4.095374) | 3.689508 / 3.745712 (-0.056204) | 3.503909 / 5.269862 (-1.765952) | 2.113812 / 4.565676 (-2.451864) | 0.057988 / 0.424275 (-0.366287) | 0.007336 / 0.007607 (-0.000271) | 0.490840 / 0.226044 (0.264795) | 4.885040 / 2.268929 (2.616112) | 2.627864 / 55.444624 (-52.816760) | 2.231467 / 6.876477 (-4.645010) | 2.251307 / 2.142072 (0.109235) | 0.577370 / 4.805227 (-4.227857) | 0.131770 / 6.500664 (-6.368895) | 0.061313 / 0.075469 (-0.014156) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.362052 / 1.841788 (-0.479735) | 21.332694 / 8.074308 (13.258386) | 15.562019 / 10.191392 (5.370627) | 0.170874 / 0.680424 (-0.509550) | 0.019226 / 0.534201 (-0.514975) | 0.400311 / 0.579283 (-0.178972) | 0.423060 / 0.434364 (-0.011304) | 0.469946 / 0.540337 (-0.070391) | 0.647745 / 1.386936 (-0.739191) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#aec567c2f224f192e6e1f9799e3afc755eb517b2 \"CML watermark\")\n"
] | "2023-07-13T12:34:24Z" | "2023-07-13T12:46:26Z" | "2023-07-13T12:37:01Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6026.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6026",
"merged_at": "2023-07-13T12:37:01Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6026.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6026"
} | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6026/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6026/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5364 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5364/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5364/comments | https://api.github.com/repos/huggingface/datasets/issues/5364/events | https://github.com/huggingface/datasets/pull/5364 | 1,498,360,628 | PR_kwDODunzps5Fiss1 | 5,364 | Support for writing arrow files directly with BeamWriter | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5364). All of your documentation changes will be reflected on that endpoint.",
"Deleting `BeamPipeline` and `upload_local_to_remote` would break the existing Beam scripts, so I reverted this change.\r\n\r\nFrom what I understand, we need these components in our scripts for the pattern:\r\n```python\r\nif not pipeline.is_local():\r\n dl_manager.ship_files_with_pipeline()\r\n```\r\n\r\nI plan to address this in a subsequent PR by (implicitly) downloading the files directly to the remote storage of the non-local runners.",
"I got `AttributeError: 'Pipeline' object has no attribute 'is_local'` when running\r\n```python\r\nload_dataset(\"wikipedia\", language=\"af\", date=\"20230101\", beam_runner=\"DirectRunner\")\r\n```\r\n```python\r\n~/.cache/huggingface/modules/datasets_modules/datasets/wikipedia/aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559/wikipedia.py in _split_generators(self, dl_manager, pipeline)\r\n 965 # Use dictionary since testing mock always returns the same result.\r\n 966 downloaded_files = dl_manager.download({\"xml\": xml_urls})\r\n--> 967 if not pipeline.is_local():\r\n 968 downloaded_files = dl_manager.ship_files_with_pipeline(downloaded_files, pipeline)\r\n 969 \r\n\r\nAttributeError: 'Pipeline' object has no attribute 'is_local'\r\n```",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010649 / 0.011353 (-0.000704) | 0.006116 / 0.011008 (-0.004892) | 0.115568 / 0.038508 (0.077060) | 0.041704 / 0.023109 (0.018595) | 0.360459 / 0.275898 (0.084561) | 0.425679 / 0.323480 (0.102200) | 0.008992 / 0.007986 (0.001006) | 0.006321 / 0.004328 (0.001993) | 0.090223 / 0.004250 (0.085973) | 0.049877 / 0.037052 (0.012824) | 0.382447 / 0.258489 (0.123958) | 0.406567 / 0.293841 (0.112726) | 0.045138 / 0.128546 (-0.083409) | 0.014203 / 0.075646 (-0.061444) | 0.388897 / 0.419271 (-0.030375) | 0.057176 / 0.043533 (0.013644) | 0.358729 / 0.255139 (0.103590) | 0.386086 / 0.283200 (0.102887) | 0.119221 / 0.141683 (-0.022462) | 1.731574 / 1.452155 (0.279419) | 1.744103 / 1.492716 (0.251386) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.230380 / 0.018006 (0.212373) | 0.493690 / 0.000490 (0.493201) | 0.005150 / 0.000200 (0.004950) | 0.000097 / 0.000054 (0.000042) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030771 / 0.037411 (-0.006641) | 0.123196 / 0.014526 (0.108671) | 0.134097 / 0.176557 (-0.042459) | 0.190442 / 0.737135 (-0.546693) | 0.138416 / 0.296338 (-0.157923) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.469763 / 0.215209 (0.254554) | 4.682847 / 2.077655 (2.605192) | 2.076717 / 1.504120 (0.572597) | 1.843721 / 1.541195 (0.302527) | 1.923486 / 1.468490 
(0.454996) | 0.817680 / 4.584777 (-3.767097) | 4.482409 / 3.745712 (0.736697) | 3.898695 / 5.269862 (-1.371167) | 2.078291 / 4.565676 (-2.487386) | 0.100285 / 0.424275 (-0.323990) | 0.014761 / 0.007607 (0.007154) | 0.611261 / 0.226044 (0.385217) | 5.926919 / 2.268929 (3.657990) | 2.685080 / 55.444624 (-52.759544) | 2.232179 / 6.876477 (-4.644298) | 2.305576 / 2.142072 (0.163504) | 0.993729 / 4.805227 (-3.811498) | 0.194491 / 6.500664 (-6.306173) | 0.074176 / 0.075469 (-0.001293) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.388592 / 1.841788 (-0.453196) | 17.146945 / 8.074308 (9.072636) | 15.989570 / 10.191392 (5.798178) | 0.200147 / 0.680424 (-0.480277) | 0.034009 / 0.534201 (-0.500192) | 0.517531 / 0.579283 (-0.061753) | 0.533966 / 0.434364 (0.099602) | 0.637024 / 0.540337 (0.096687) | 0.749166 / 1.386936 (-0.637770) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008240 / 0.011353 (-0.003113) | 0.006139 / 0.011008 (-0.004869) | 0.112258 / 0.038508 (0.073750) | 0.039001 / 0.023109 (0.015891) | 0.449467 / 0.275898 (0.173569) | 0.483422 / 0.323480 (0.159942) | 0.006176 / 0.007986 (-0.001810) | 0.006340 / 0.004328 (0.002012) | 0.083105 / 0.004250 (0.078855) | 0.047002 / 0.037052 (0.009950) | 0.458564 / 0.258489 (0.200075) | 0.513704 / 0.293841 (0.219863) | 0.041359 / 0.128546 (-0.087188) | 0.014515 / 0.075646 (-0.061131) | 0.392599 / 0.419271 (-0.026673) | 0.055222 / 0.043533 (0.011690) | 0.446956 / 0.255139 (0.191817) | 0.469194 / 0.283200 (0.185994) | 0.118212 / 0.141683 (-0.023471) | 1.682647 / 1.452155 (0.230492) | 1.780076 / 1.492716 (0.287360) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.259124 / 0.018006 (0.241117) | 0.507559 / 0.000490 (0.507069) | 0.001080 / 0.000200 (0.000880) | 0.000081 / 0.000054 (0.000026) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031969 / 0.037411 (-0.005442) | 0.126997 / 0.014526 (0.112471) | 0.139593 / 0.176557 (-0.036963) | 0.182735 / 0.737135 (-0.554400) | 0.145871 / 0.296338 (-0.150468) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.530894 / 0.215209 (0.315685) | 5.284979 / 2.077655 (3.207324) | 2.592886 / 1.504120 (1.088766) | 2.407202 / 1.541195 (0.866007) | 2.434079 / 1.468490 (0.965589) | 0.829382 / 4.584777 (-3.755395) | 4.481710 / 3.745712 (0.735998) | 3.912280 / 5.269862 (-1.357581) | 1.962291 / 4.565676 (-2.603386) | 0.101840 / 0.424275 (-0.322435) | 0.014528 / 0.007607 (0.006921) | 0.639956 / 0.226044 (0.413911) | 6.414685 / 2.268929 (4.145756) | 3.240290 / 55.444624 (-52.204334) | 2.795208 / 6.876477 (-4.081269) | 2.912122 / 2.142072 (0.770050) | 0.992188 / 4.805227 (-3.813039) | 0.200701 / 6.500664 (-6.299964) | 0.074235 / 0.075469 (-0.001234) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.455075 / 1.841788 (-0.386712) | 17.186669 / 8.074308 (9.112361) | 15.404357 / 10.191392 (5.212965) | 0.168267 / 0.680424 (-0.512157) | 0.020774 / 0.534201 (-0.513427) | 0.502603 / 0.579283 (-0.076680) | 0.506500 / 0.434364 (0.072136) | 0.624245 / 0.540337 (0.083907) | 0.735529 / 1.386936 (-0.651407) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png \"CML watermark\")\n"
] | "2022-12-15T12:38:05Z" | "2023-01-25T15:49:25Z" | null | CONTRIBUTOR | null | 1 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5364.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5364",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5364.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5364"
} | Make it possible to write Arrow files directly with `BeamWriter` rather than converting from Parquet to Arrow, which is sub-optimal, especially for big datasets for which Beam is primarily used. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5364/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5364/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3188 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3188/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3188/comments | https://api.github.com/repos/huggingface/datasets/issues/3188/events | https://github.com/huggingface/datasets/issues/3188 | 1,040,980,712 | I_kwDODunzps4-DBro | 3,188 | conll2002 issues | {
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/BramVanroy",
"id": 2779410,
"login": "BramVanroy",
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"type": "User",
"url": "https://api.github.com/users/BramVanroy"
} | [
{
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co",
"id": 3470211881,
"name": "dataset-viewer",
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer"
}
] | closed | false | null | [] | null | [
"Hi ! Thanks for reporting :)\r\n\r\nThis is related to https://github.com/huggingface/datasets/issues/2742, I'm working on it. It should fix the viewer for around 80 datasets.\r\n",
"Ah, hadn't seen that sorry.\r\n\r\nThe scrambled \"point of contact\" is a separate issue though, I think.",
"@lhoestq The \"point of contact\" is still an issue.",
"It will be fixed in https://github.com/huggingface/datasets/pull/3274, thanks"
] | "2021-11-01T09:49:24Z" | "2021-11-15T13:50:59Z" | "2021-11-12T17:18:11Z" | CONTRIBUTOR | null | null | null | **Link:** https://huggingface.co/datasets/conll2002
The dataset viewer throws a server error when trying to preview the dataset.
```
Message: Extraction protocol 'train' for file at 'https://raw.githubusercontent.com/teropa/nlp/master/resources/corpora/conll2002/esp.train' is not implemented yet
```
In addition, the "point of contact" has encoding issues and does not work when clicked.
Am I the one who added this dataset? No, @lhoestq did. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3188/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3188/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6059 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6059/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6059/comments | https://api.github.com/repos/huggingface/datasets/issues/6059/events | https://github.com/huggingface/datasets/issues/6059 | 1,816,537,176 | I_kwDODunzps5sRihY | 6,059 | Provide ability to load label mappings from file | {
"avatar_url": "https://avatars.githubusercontent.com/u/5028974?v=4",
"events_url": "https://api.github.com/users/david-waterworth/events{/privacy}",
"followers_url": "https://api.github.com/users/david-waterworth/followers",
"following_url": "https://api.github.com/users/david-waterworth/following{/other_user}",
"gists_url": "https://api.github.com/users/david-waterworth/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/david-waterworth",
"id": 5028974,
"login": "david-waterworth",
"node_id": "MDQ6VXNlcjUwMjg5NzQ=",
"organizations_url": "https://api.github.com/users/david-waterworth/orgs",
"received_events_url": "https://api.github.com/users/david-waterworth/received_events",
"repos_url": "https://api.github.com/users/david-waterworth/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/david-waterworth/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/david-waterworth/subscriptions",
"type": "User",
"url": "https://api.github.com/users/david-waterworth"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [] | "2023-07-22T02:04:19Z" | "2023-07-22T02:04:19Z" | null | NONE | null | null | null | ### Feature request
My task is classification over a dataset with a large, hierarchical label set. Even ignoring the hierarchy, I'm not able to find an example using `datasets` where the label names aren't hard-coded. Hard-coding works fine for a handful of labels, but ideally there would be a way of loading the name/id mappings required for `datasets.features.ClassLabel` from a file.
It is possible to pass a file to `ClassLabel`, but I cannot see an easy way of using this with `GeneratorBasedBuilder`: `self._info` is called before the `dl_manager` is constructed, so even if my dataset contains, say, `label_mappings.json`, there's no way of loading it in order to construct the `datasets.DatasetInfo`.
I can see other uses for accessing the `download_manager` from `self._info` — e.g. if the files carry a schema (`arrow` or `parquet` files), the `datasets.DatasetInfo` could be inferred.
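To illustrate, a minimal sketch of the kind of inference I mean — the parquet path is an assumption for illustration:
```python
import pyarrow.parquet as pq

import datasets

# Derive DatasetInfo features from a parquet file's Arrow schema.
schema = pq.read_schema("data/train.parquet")
info = datasets.DatasetInfo(features=datasets.Features.from_arrow_schema(schema))
```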
The workaround that was suggested in the forum is to generate a `.py` file from the `label_mappings.json` and import it.
```python
import csv

import datasets
from datasets.tasks import TextClassification

# _DESCRIPTION, _TRAIN_DOWNLOAD_URL and _TEST_DOWNLOAD_URL are module-level
# constants defined elsewhere in the loading script.


class TestDatasetBuilder(datasets.GeneratorBasedBuilder):
    VERSION = datasets.Version("1.0.0")

    def _info(self):
        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=datasets.Features(
                {
                    "text": datasets.Value("string"),
                    "label": datasets.features.ClassLabel(names=["label_1", "label_2"]),
                }
            ),
            task_templates=[TextClassification(text_column="text", label_column="label")],
        )

    def _split_generators(self, dl_manager):
        train_path = dl_manager.download_and_extract(_TRAIN_DOWNLOAD_URL)
        test_path = dl_manager.download_and_extract(_TEST_DOWNLOAD_URL)
        return [
            datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": train_path}),
            datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepath": test_path}),
        ]

    def _generate_examples(self, filepath):
        """Generate examples from a CSV file."""
        with open(filepath, encoding="utf-8") as csv_file:
            csv_reader = csv.DictReader(csv_file)
            for id_, row in enumerate(csv_reader):
                yield id_, row
```
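For reference, a minimal sketch of what I'd like to be able to do before `_info` runs — the file name and JSON structure are assumptions:
```python
import json

import datasets

# Assuming label_mappings.json sits next to the loading script and holds a
# JSON list of label names, e.g. ["label_1", "label_2"].
with open("label_mappings.json", encoding="utf-8") as f:
    label_names = json.load(f)

label_feature = datasets.features.ClassLabel(names=label_names)
# ClassLabel can also read a newline-delimited text file directly:
# datasets.features.ClassLabel(names_file="label_names.txt")
```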
### Motivation
Allow `datasets.DatasetInfo` to be generated based on the contents of the dataset.
### Your contribution
I'm willing to work on a PR with guidance. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6059/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6059/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3966 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3966/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3966/comments | https://api.github.com/repos/huggingface/datasets/issues/3966/events | https://github.com/huggingface/datasets/pull/3966 | 1,173,883,084 | PR_kwDODunzps40rBNE | 3,966 | Create metric card for BERTScore | {
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sashavor",
"id": 14205986,
"login": "sashavor",
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"repos_url": "https://api.github.com/users/sashavor/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sashavor"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | "2022-03-18T18:21:56Z" | "2022-03-22T13:35:28Z" | "2022-03-22T13:30:56Z" | NONE | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3966.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3966",
"merged_at": "2022-03-22T13:30:56Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3966.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3966"
} | Proposing a metric card for BERTScore | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3966/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3966/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4432 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4432/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4432/comments | https://api.github.com/repos/huggingface/datasets/issues/4432/events | https://github.com/huggingface/datasets/pull/4432 | 1,255,523,720 | PR_kwDODunzps441JmK | 4,432 | Fix builder docstring | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | "2022-06-01T09:45:30Z" | "2022-06-02T17:43:47Z" | "2022-06-02T17:35:15Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4432.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4432",
"merged_at": "2022-06-02T17:35:15Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4432.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4432"
} | Currently, the args of `DatasetBuilder` do not appear in the docs: https://huggingface.co/docs/datasets/v2.1.0/en/package_reference/builder_classes#datasets.DatasetBuilder | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4432/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4432/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2036 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2036/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2036/comments | https://api.github.com/repos/huggingface/datasets/issues/2036/events | https://github.com/huggingface/datasets/issues/2036 | 829,909,258 | MDU6SXNzdWU4Mjk5MDkyNTg= | 2,036 | Cannot load wikitext | {
"avatar_url": "https://avatars.githubusercontent.com/u/19349207?v=4",
"events_url": "https://api.github.com/users/Gpwner/events{/privacy}",
"followers_url": "https://api.github.com/users/Gpwner/followers",
"following_url": "https://api.github.com/users/Gpwner/following{/other_user}",
"gists_url": "https://api.github.com/users/Gpwner/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Gpwner",
"id": 19349207,
"login": "Gpwner",
"node_id": "MDQ6VXNlcjE5MzQ5MjA3",
"organizations_url": "https://api.github.com/users/Gpwner/orgs",
"received_events_url": "https://api.github.com/users/Gpwner/received_events",
"repos_url": "https://api.github.com/users/Gpwner/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Gpwner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Gpwner/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Gpwner"
} | [] | closed | false | null | [] | null | [
"Solved!"
] | "2021-03-12T09:09:39Z" | "2021-03-15T08:45:02Z" | "2021-03-15T08:44:44Z" | NONE | null | null | null | when I execute these codes
```
>>> from datasets import load_dataset
>>> test_dataset = load_dataset("wikitext")
```
I got an error. Any help?
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/xxx/anaconda3/envs/transformer/lib/python3.7/site-packages/datasets/load.py", line 589, in load_dataset
path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True
File "/home/xxx/anaconda3/envs/transformer/lib/python3.7/site-packages/datasets/load.py", line 267, in prepare_module
local_path = cached_path(file_path, download_config=download_config)
File "/home/xxx/anaconda3/envs/transformer/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 308, in cached_path
use_etag=download_config.use_etag,
File "/home/xxx/anaconda3/envs/transformer/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 487, in get_from_cache
raise ConnectionError("Couldn't reach {}".format(url))
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/wikitext/wikitext.py
``` | {
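As a follow-up to the "Solved!" comment above, a minimal sketch of the suggested fix, assuming the `ConnectionError` disappears after upgrading `datasets`; the config name is an illustrative pick among the standard wikitext configs:

```python
# Run `pip install --upgrade datasets` first, as suggested for similar issues.
from datasets import load_dataset

dataset = load_dataset("wikitext", "wikitext-103-raw-v1", split="train")
print(dataset[0]["text"])
```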
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2036/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2036/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3876 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3876/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3876/comments | https://api.github.com/repos/huggingface/datasets/issues/3876/events | https://github.com/huggingface/datasets/pull/3876 | 1,164,045,075 | PR_kwDODunzps40LYC8 | 3,876 | Fix download_mode in dataset_module_factory | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3876). All of your documentation changes will be reflected on that endpoint."
] | "2022-03-09T14:54:33Z" | "2022-03-10T08:47:00Z" | "2022-03-10T08:46:59Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3876.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3876",
"merged_at": "2022-03-10T08:46:59Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3876.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3876"
} | Fix `download_mode` value set in `dataset_module_factory`.
Before the fix, it was set to a `bool` (defaulting to `False`).
Also set its default value properly in all public functions. | {
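For illustration, a hedged sketch of how `download_mode` is passed after the fix, assuming `DownloadMode` is exported at the top level as in recent releases; the dataset name is arbitrary:

```python
from datasets import load_dataset, DownloadMode

# REUSE_DATASET_IF_EXISTS is the proper default this PR restores;
# FORCE_REDOWNLOAD is shown here only to make the parameter visible.
ds = load_dataset("squad", download_mode=DownloadMode.FORCE_REDOWNLOAD)
```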
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3876/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3876/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4098 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4098/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4098/comments | https://api.github.com/repos/huggingface/datasets/issues/4098/events | https://github.com/huggingface/datasets/pull/4098 | 1,193,245,522 | PR_kwDODunzps41qXjo | 4,098 | Proposing WikiSplit metric card | {
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sashavor",
"id": 14205986,
"login": "sashavor",
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"repos_url": "https://api.github.com/users/sashavor/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sashavor"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"A quick Github tip ;) To avoid running N times the CI, you can push all the changes at once: go to Files Changed tab, and on each suggestion there's a \"add to commit batch\" and then you can do one commit for all the suggestions you want to approve ;)",
"Oh thanks for the tip!! Haha I was wondering why it was running a bunch of\ntimes :P\n\nOn Tue, Apr 5, 2022 at 11:44 AM Quentin Lhoest ***@***.***>\nwrote:\n\n> A quick Github tip ;) To avoid running N times the CI, you can push all\n> the changes at once: go to Files Changed tab, and on each suggestion\n> there's a \"add to commit batch\" and then you can do one commit for all the\n> suggestions you want to approve ;)\n>\n> —\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/pull/4098#issuecomment-1088894515>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ADMMIIRZYNVFJRWRWW4VJY3VDRNUBANCNFSM5SS7L5HA>\n> .\n> You are receiving this because you modified the open/close state.Message\n> ID: ***@***.***>\n>\n\n\n-- \nSasha Luccioni, PhD\nPostdoctoral Researcher (Mila, Université de Montréal)\nChercheure postdoctorale (Mila, Université de Montréal)\nhttps://www.sashaluccioni.com/\n [image: Image result for universite de montreal logo]\n"
] | "2022-04-05T14:36:34Z" | "2022-10-11T09:10:21Z" | "2022-04-05T15:42:28Z" | NONE | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4098.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4098",
"merged_at": "2022-04-05T15:42:28Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4098.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4098"
} | Pinging @lhoestq to ensure that my distinction between the dataset and the metric are clear :sweat_smile: | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4098/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4098/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6236 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6236/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6236/comments | https://api.github.com/repos/huggingface/datasets/issues/6236/events | https://github.com/huggingface/datasets/issues/6236 | 1,893,648,480 | I_kwDODunzps5w3shg | 6,236 | Support buffer shuffle for to_tf_dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/7635551?v=4",
"events_url": "https://api.github.com/users/EthanRock/events{/privacy}",
"followers_url": "https://api.github.com/users/EthanRock/followers",
"following_url": "https://api.github.com/users/EthanRock/following{/other_user}",
"gists_url": "https://api.github.com/users/EthanRock/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/EthanRock",
"id": 7635551,
"login": "EthanRock",
"node_id": "MDQ6VXNlcjc2MzU1NTE=",
"organizations_url": "https://api.github.com/users/EthanRock/orgs",
"received_events_url": "https://api.github.com/users/EthanRock/received_events",
"repos_url": "https://api.github.com/users/EthanRock/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/EthanRock/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/EthanRock/subscriptions",
"type": "User",
"url": "https://api.github.com/users/EthanRock"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [
"cc @Rocketknight1 ",
"Hey! You can implement this yourself, just:\r\n\r\n1) Create the dataset with `to_tf_dataset()` with `shuffle=False`\r\n2) Add an `unbatch()` at the end (or use batch_size=1)\r\n3) Add a `shuffle()` to the resulting dataset with your desired buffer size\r\n4) Add a `batch()` at the end again to re-batch your dataset.\r\n\r\nNote that the way we construct datasets in `to_tf_dataset()`, we don't actually shuffle the entire dataset in-memory, using `tf.data.Dataset.shuffle()`! Instead, we shuffle an index array and then load from the dataset with that. This means that shuffling with `tf.data.Dataset.shuffle()` will probably be slower and use more memory than our approach - I don't think adding the option for smaller shuffle buffers will actually save you memory on this!",
"Thanks for your reply! @Rocketknight1 \r\n\"We don't actually shuffle the entire dataset in-memory, using tf.data.Dataset.shuffle()! Instead, we shuffle an index array and then load from the dataset with that.\"\r\nIn such case, there will be random access to dataset data during shuffling. When the dataset is large, the performance can be X10 times slow. I have tried many ways with to_tf_dataset() trying to achieve comparable performance with tf.data.Dataset().shuffle(buffer_size).batch(). But the performance with to_tf_dataset() is still slow. \r\n"
] | "2023-09-13T03:19:44Z" | "2023-09-18T01:11:21Z" | null | NONE | null | null | null | ### Feature request
I'm using `to_tf_dataset` to convert a large dataset to a `tf.data.Dataset` and Keras `fit` to train a model.
Currently, `to_tf_dataset` only supports a full-size shuffle, which can be very slow on a large dataset.
`tf.data.Dataset` supports buffer shuffling by default:
```python
shuffle(
    buffer_size, seed=None, reshuffle_each_iteration=None, name=None
)
```
### Motivation
I was frustrated to find that loading a large dataset with shuffling is very slow. It currently seems impossible to shuffle a big dataset before training with Keras.
### Your contribution
NA | {
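For reference, a minimal sketch of the buffer-shuffle workaround described in the comments above, assuming `to_tf_dataset` accepts `shuffle=False`; the dataset name, buffer size, and batch size are illustrative choices:

```python
import tensorflow as tf
from datasets import load_dataset

ds = load_dataset("imdb", split="train")
tf_ds = ds.to_tf_dataset(batch_size=16, shuffle=False)  # no full-size shuffle
tf_ds = (
    tf_ds.unbatch()                    # back to individual examples
         .shuffle(buffer_size=10_000)  # bounded shuffle buffer
         .batch(16)                    # re-batch
         .prefetch(tf.data.AUTOTUNE)
)
```

As noted in the comments, this tf.data-level shuffle may use more memory than the library's index-shuffling approach, so it is a trade-off rather than a strict win.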
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6236/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6236/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2188 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2188/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2188/comments | https://api.github.com/repos/huggingface/datasets/issues/2188/events | https://github.com/huggingface/datasets/issues/2188 | 853,044,166 | MDU6SXNzdWU4NTMwNDQxNjY= | 2,188 | Duplicate data in Timit dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/78190188?v=4",
"events_url": "https://api.github.com/users/thanh-p/events{/privacy}",
"followers_url": "https://api.github.com/users/thanh-p/followers",
"following_url": "https://api.github.com/users/thanh-p/following{/other_user}",
"gists_url": "https://api.github.com/users/thanh-p/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/thanh-p",
"id": 78190188,
"login": "thanh-p",
"node_id": "MDQ6VXNlcjc4MTkwMTg4",
"organizations_url": "https://api.github.com/users/thanh-p/orgs",
"received_events_url": "https://api.github.com/users/thanh-p/received_events",
"repos_url": "https://api.github.com/users/thanh-p/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/thanh-p/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thanh-p/subscriptions",
"type": "User",
"url": "https://api.github.com/users/thanh-p"
} | [] | closed | false | null | [] | null | [
"Hi ! Thanks for reporting\r\nIf I recall correctly this has been recently fixed #1995\r\nCan you try to upgrade your local version of `datasets` ?\r\n```\r\npip install --upgrade datasets\r\n```",
"Hi Ihoestq,\r\n\r\nThank you. It works after upgrading the datasets\r\n"
] | "2021-04-08T04:21:54Z" | "2021-04-08T12:13:19Z" | "2021-04-08T12:13:19Z" | NONE | null | null | null | I ran a simple code to list all texts in Timit dataset and the texts were all the same.
Is this dataset corrupted?
**Code:**
```python
from datasets import load_dataset

timit = load_dataset("timit_asr")
print(*timit['train']['text'], sep='\n')
```
**Result:**
```
Would such an act of refusal be useful?
Would such an act of refusal be useful?
Would such an act of refusal be useful?
Would such an act of refusal be useful?
...
...
Would such an act of refusal be useful?
```
| {
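A quick sanity check, assuming you have upgraded to a release containing the fix from #1995 mentioned in the comments; it counts distinct transcriptions instead of eyeballing the printout:

```python
from collections import Counter
from datasets import load_dataset

timit = load_dataset("timit_asr")
counts = Counter(timit["train"]["text"])
print(f"{len(counts)} unique sentences out of {sum(counts.values())} examples")
```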
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2188/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2188/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3622 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3622/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3622/comments | https://api.github.com/repos/huggingface/datasets/issues/3622/events | https://github.com/huggingface/datasets/issues/3622 | 1,112,831,661 | I_kwDODunzps5CVHat | 3,622 | Extend support for streaming datasets that use os.path.relpath | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | [] | "2022-01-24T15:58:23Z" | "2022-02-04T14:03:54Z" | "2022-02-04T14:03:54Z" | MEMBER | null | null | null | Extend support for streaming datasets that use `os.path.relpath`.
This feature will also be useful for yielding the relative paths of audio or image files.
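For context, a hypothetical loading-script fragment illustrating the use case; `archive_root`, the directory walk, and the field name are illustrative assumptions rather than code from the issue:

```python
import os

def _generate_examples(archive_root):
    # Key each file by its path relative to the archive root, so the same
    # script can work in streaming mode once os.path.relpath is supported.
    for root, _dirs, files in os.walk(archive_root):
        for fname in files:
            full_path = os.path.join(root, fname)
            rel_path = os.path.relpath(full_path, archive_root)
            yield rel_path, {"relative_path": rel_path}
```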
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3622/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3622/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4423 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4423/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4423/comments | https://api.github.com/repos/huggingface/datasets/issues/4423/events | https://github.com/huggingface/datasets/pull/4423 | 1,253,326,023 | PR_kwDODunzps44trdP | 4,423 | Add new dataset MMChat | {
"avatar_url": "https://avatars.githubusercontent.com/u/2529049?v=4",
"events_url": "https://api.github.com/users/silverriver/events{/privacy}",
"followers_url": "https://api.github.com/users/silverriver/followers",
"following_url": "https://api.github.com/users/silverriver/following{/other_user}",
"gists_url": "https://api.github.com/users/silverriver/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/silverriver",
"id": 2529049,
"login": "silverriver",
"node_id": "MDQ6VXNlcjI1MjkwNDk=",
"organizations_url": "https://api.github.com/users/silverriver/orgs",
"received_events_url": "https://api.github.com/users/silverriver/received_events",
"repos_url": "https://api.github.com/users/silverriver/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/silverriver/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/silverriver/subscriptions",
"type": "User",
"url": "https://api.github.com/users/silverriver"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks ! As for https://github.com/huggingface/datasets/pull/4431 please also update the licensing section in https://huggingface.co/datasets/silver/mmchat ;)\r\n\r\nThen if it's fine for you feel free to close this PR"
] | "2022-05-31T04:45:07Z" | "2022-06-11T12:40:52Z" | "2022-06-11T12:31:42Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4423.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4423",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/4423.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4423"
} | Hi, I am adding a new dataset MMChat.
It seems that all tests pass. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4423/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4423/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2064 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2064/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2064/comments | https://api.github.com/repos/huggingface/datasets/issues/2064/events | https://github.com/huggingface/datasets/pull/2064 | 833,002,360 | MDExOlB1bGxSZXF1ZXN0NTk0MDczOTQ1 | 2,064 | Fix ted_talks_iwslt version error | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | [] | "2021-03-16T16:43:45Z" | "2021-03-16T18:00:08Z" | "2021-03-16T18:00:08Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2064.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2064",
"merged_at": "2021-03-16T18:00:07Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2064.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2064"
} | This PR fixes the bug where the version argument would be passed twice if the dataset configuration was created on the fly.
Fixes #2059 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2064/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2064/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1417 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1417/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1417/comments | https://api.github.com/repos/huggingface/datasets/issues/1417/events | https://github.com/huggingface/datasets/pull/1417 | 760,660,918 | MDExOlB1bGxSZXF1ZXN0NTM1NDU1NzM3 | 1,417 | WIP: Vinay/add peer read dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/34424769?v=4",
"events_url": "https://api.github.com/users/vinaykudari/events{/privacy}",
"followers_url": "https://api.github.com/users/vinaykudari/followers",
"following_url": "https://api.github.com/users/vinaykudari/following{/other_user}",
"gists_url": "https://api.github.com/users/vinaykudari/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/vinaykudari",
"id": 34424769,
"login": "vinaykudari",
"node_id": "MDQ6VXNlcjM0NDI0NzY5",
"organizations_url": "https://api.github.com/users/vinaykudari/orgs",
"received_events_url": "https://api.github.com/users/vinaykudari/received_events",
"repos_url": "https://api.github.com/users/vinaykudari/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/vinaykudari/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vinaykudari/subscriptions",
"type": "User",
"url": "https://api.github.com/users/vinaykudari"
} | [] | closed | false | null | [] | null | [] | "2020-12-09T20:49:52Z" | "2020-12-11T18:43:31Z" | "2020-12-11T18:43:31Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1417.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1417",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1417.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1417"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1417/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1417/timeline | null | null | true |
|
https://api.github.com/repos/huggingface/datasets/issues/5883 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5883/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5883/comments | https://api.github.com/repos/huggingface/datasets/issues/5883/events | https://github.com/huggingface/datasets/pull/5883 | 1,719,527,597 | PR_kwDODunzps5RAkYi | 5,883 | Fix string-encoding, make `batch_size` optional, and minor improvements in `Dataset.to_tf_dataset` | {
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alvarobartt",
"id": 36760800,
"login": "alvarobartt",
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alvarobartt"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"To showcase the current issue, here's a Colab Gist, that shows that the `imdb` dataset cannot be read/iterated, since one or more samples contain a non-ascii character that is being converted to `numpy.bytes_`, and so on fails.\r\n\r\nColab Gist at https://gist.github.com/alvarobartt/1746959d1abb9a33e0c593f3bd82a2fb\r\n\r\nAlso, here's a quick sample of what's happening:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nds = load_dataset(\"imdb\", split=\"train\")\r\ntfds = ds.to_tf_dataset(batch_size=16)\r\nfor batch in tfds:\r\n print(batch)\r\n>>> UnicodeEncodeError: 'ascii' codec can't encode character '\\xe9' in position 0: ordinal not in range(128)\r\n```\r\n\r\nA more detailed version of it:\r\n\r\n```python\r\nfrom datasets import Dataset\r\n\r\nds = Dataset.from_dict(\r\n {\r\n \"a\": [1],\r\n \"b\": [\"é\"],\r\n }\r\n)\r\ntfds = ds.to_tf_dataset(batch_size=1)\r\nfor batch in tfds:\r\n print(batch)\r\n>>> UnicodeEncodeError: 'ascii' codec can't encode character '\\xe9' in position 0: ordinal not in range(128)\r\n```\r\n\r\nThe original issue comes from https://github.com/tensorflow/tensorflow/blob/388d952114e59a1aeda440ed4737b29f8b7c6e8a/tensorflow/python/ops/script_ops.py#LL234C4-L234C4, which could easily be solved by replacing that line with `return result.astype(np.unicode_)` but they are mentioning that it may lead to issues.\r\n\r\nEven the following fails in `numpy`:\r\n\r\n```python\r\nimport numpy as np\r\n\r\nx = np.array([\"é\"]).astype(np.bytes_)\r\n```",
"cc. @lhoestq :hugs:",
"cc @Rocketknight1 ",
"> Nice ! Could you add some tests to make sure that batch_size=None works as expected ?\r\n\r\nSure, I'll add the tests for everything, including the string-encoding issue to make sure it's solved!",
"Thanks for the review @lhoestq and @Rocketknight1! I do understand that processing it in batches is always more efficient than processing it one-by-one, it was just to make `batch_size` optional. What we can do is default it to a certain batch size e.g. 16 as before, and that's it, but I think it can still remain optional.",
"@Rocketknight1 then I'll add the integration tests for the optional `batch_size` as well as for the encoding of non-ASCII compatible characters 😄 Do we set the default `batch_size` to 16 instead of `None`?",
"@alvarobartt I think 16 is a reasonable default, yep!",
"I think default should be None, not 16.\r\nUsers won't expect to have it batched by default.",
"Then I'll leave it as is, and add the unit/integration tests, thanks @Rocketknight1 and @lhoestq ",
"Hi @Rocketknight1 @lhoestq! So the string-encoding issue is already solved, but I've got one doubt about the `batch_size` being optional in the multiprocessing approach, since in that case I assume the `batch_size` should be mandatory, for the moment I'm assuming it is/should be mandatory, but let me know if you want me to add a check to disallow `batch_size=None` when `num_workers>1`. Thanks!",
"> To showcase the current issue, here's a Colab Gist, that shows that the `imdb` dataset cannot be read/iterated, since one or more samples contain a non-ascii character that is being converted to `numpy.bytes_`, and so on fails.\r\n> \r\n> Colab Gist at https://gist.github.com/alvarobartt/1746959d1abb9a33e0c593f3bd82a2fb\r\n\r\nI've used the Colab shared above for testing purposes, and it works fine, plus the unit/integration tests are passing. I've also trained a `KerasNLP` model with incoming data from 🤗`datasets` with no issue at all!",
"> in the multiprocessing approach, since in that case I assume the batch_size should be mandatory,\r\n\r\nNo I think they're quite orthogonal, no need to have it mandatory",
"> No I think they're quite orthogonal, no need to have it mandatory\r\n\r\nBut it will break if `batch_size=None` as the multiprocessing approach will aim to prepare batches and distribute those to every worker, and assuming `batch_size=1` when `batch_size=None` I guess is not a good assumption, right?",
"Ah I see. Multiprocessing should support batch_size=None indeed. If you have ideas you can do it in this PR, or raise a NotImplementedError and we can see later",
"Sure @lhoestq, I can add a `NotImplementedError` for the moment, and prepare the next PR straight-away to tackle the multiprocessing approach with `batch_size=None`, but not sure if that may eventually collide with @Rocketknight1 PR at https://github.com/huggingface/datasets/pull/5863",
"Yes, let me merge the PR at #5863 after this one, and then we can open another to improve the behaviour with multiprocessing and `batch_size=None`!",
"Sure @Rocketknight1 makes complete sense to me! Do you want me to add the `raise NotImplementedError` and then we merge this PR? Or you prefer to directly merge the current?",
"`raise NotImplementedError` for now with an error telling the user that multiprocessing needs them to specify a batch size, I think!",
"Since you recently approved @Rocketknight1, are we ready to merge? Thanks 🤗",
"Ah actually it looks like `minimal_tf_collate_fn` doesn't support batch_size=None",
"Hi @lhoestq so I didn't include the call to `collate_fn`, as we won't need to collate the incoming data e.g. \"str\" should remain a \"str\" not a [\"str\"], and the `minimal_collate_fn` was indeed putting everything into a list, so the output was not un-batched, but batched with size 1",
"What if the user passes a collate_fn ? The torch DataLoader still applies it if batch_size=None for example.\r\n\r\nDoes my last change look of to you ? If so I think we can merge",
"> What if the user passes a collate_fn ? The torch DataLoader still applies it if batch_size=None for example.\r\n> \r\n> Does my last change look of to you ? If so I think we can merge\r\n\r\nI think we're good, since it won't batch it under the scenario of `str` being provided instead of `List[str]`, and the unit/integration tests are passing, so I'm OK to merge. Maybe we can double check with Matt? cc @Rocketknight1 ",
"Yes, and sorry for the delay! I'm happy to merge.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006555 / 0.011353 (-0.004798) | 0.004521 / 0.011008 (-0.006487) | 0.096633 / 0.038508 (0.058125) | 0.032859 / 0.023109 (0.009750) | 0.294632 / 0.275898 (0.018734) | 0.325140 / 0.323480 (0.001660) | 0.005676 / 0.007986 (-0.002310) | 0.005252 / 0.004328 (0.000924) | 0.074349 / 0.004250 (0.070099) | 0.045836 / 0.037052 (0.008784) | 0.302919 / 0.258489 (0.044430) | 0.340686 / 0.293841 (0.046845) | 0.028398 / 0.128546 (-0.100148) | 0.008942 / 0.075646 (-0.066704) | 0.326994 / 0.419271 (-0.092278) | 0.049556 / 0.043533 (0.006023) | 0.293883 / 0.255139 (0.038744) | 0.316522 / 0.283200 (0.033322) | 0.097385 / 0.141683 (-0.044298) | 1.405334 / 1.452155 (-0.046821) | 1.521529 / 1.492716 (0.028812) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.212269 / 0.018006 (0.194263) | 0.445692 / 0.000490 (0.445203) | 0.004930 / 0.000200 (0.004730) | 0.000093 / 0.000054 (0.000039) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026907 / 0.037411 (-0.010504) | 0.108607 / 0.014526 (0.094081) | 0.116806 / 0.176557 (-0.059751) | 0.178428 / 0.737135 (-0.558707) | 0.122326 / 0.296338 (-0.174012) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.404211 / 0.215209 (0.189002) | 4.045374 / 2.077655 (1.967719) | 1.877237 / 1.504120 (0.373117) | 1.706276 / 1.541195 (0.165081) | 1.750610 / 1.468490 
(0.282120) | 0.522331 / 4.584777 (-4.062446) | 3.742286 / 3.745712 (-0.003426) | 1.791285 / 5.269862 (-3.478577) | 1.043872 / 4.565676 (-3.521805) | 0.065176 / 0.424275 (-0.359099) | 0.011821 / 0.007607 (0.004214) | 0.507374 / 0.226044 (0.281329) | 5.088803 / 2.268929 (2.819875) | 2.282742 / 55.444624 (-53.161882) | 1.950737 / 6.876477 (-4.925740) | 2.042262 / 2.142072 (-0.099810) | 0.636525 / 4.805227 (-4.168702) | 0.140837 / 6.500664 (-6.359827) | 0.063223 / 0.075469 (-0.012246) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.188070 / 1.841788 (-0.653718) | 14.622681 / 8.074308 (6.548372) | 13.247988 / 10.191392 (3.056596) | 0.165858 / 0.680424 (-0.514566) | 0.017476 / 0.534201 (-0.516725) | 0.391973 / 0.579283 (-0.187310) | 0.433326 / 0.434364 (-0.001038) | 0.467163 / 0.540337 (-0.073175) | 0.568359 / 1.386936 (-0.818577) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006076 / 0.011353 (-0.005276) | 0.004439 / 0.011008 (-0.006570) | 0.074496 / 0.038508 (0.035988) | 0.031396 / 0.023109 (0.008287) | 0.372237 / 0.275898 (0.096339) | 0.403412 / 0.323480 (0.079932) | 0.005430 / 0.007986 (-0.002555) | 0.003846 / 0.004328 (-0.000483) | 0.074403 / 0.004250 (0.070153) | 0.045398 / 0.037052 (0.008346) | 0.394133 / 0.258489 (0.135644) | 0.421769 / 0.293841 (0.127928) | 0.027936 / 0.128546 (-0.100610) | 0.008962 / 0.075646 (-0.066685) | 0.083158 / 0.419271 (-0.336113) | 0.044863 / 0.043533 (0.001331) | 0.393834 / 0.255139 (0.138695) | 0.391537 / 0.283200 (0.108337) | 0.097971 / 0.141683 (-0.043712) | 1.496632 / 1.452155 (0.044477) | 1.585511 / 1.492716 (0.092795) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.010094 / 0.018006 (-0.007913) | 0.437811 / 0.000490 (0.437321) | 0.000963 / 0.000200 (0.000763) | 0.000084 / 0.000054 (0.000029) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028864 / 0.037411 (-0.008547) | 0.112480 / 0.014526 (0.097954) | 0.120938 / 0.176557 (-0.055619) | 0.170888 / 0.737135 (-0.566247) | 0.125903 / 0.296338 (-0.170435) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.426716 / 0.215209 (0.211507) | 4.238380 / 2.077655 (2.160725) | 2.052889 / 1.504120 (0.548769) | 1.871043 / 1.541195 (0.329848) | 1.890405 / 1.468490 (0.421915) | 0.522059 / 4.584777 (-4.062718) | 3.813331 / 3.745712 (0.067619) | 2.891651 / 5.269862 (-2.378210) | 1.323836 / 4.565676 (-3.241841) | 0.065124 / 0.424275 (-0.359151) | 0.011498 / 0.007607 (0.003891) | 0.525102 / 0.226044 (0.299057) | 5.245190 / 2.268929 (2.976261) | 2.531149 / 55.444624 (-52.913476) | 2.197323 / 6.876477 (-4.679153) | 2.197314 / 2.142072 (0.055241) | 0.633423 / 4.805227 (-4.171804) | 0.140248 / 6.500664 (-6.360416) | 0.064432 / 0.075469 (-0.011037) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.270639 / 1.841788 (-0.571149) | 14.856678 / 8.074308 (6.782369) | 14.337631 / 10.191392 (4.146239) | 0.195319 / 0.680424 (-0.485105) | 0.017628 / 0.534201 (-0.516573) | 0.393984 / 0.579283 (-0.185299) | 0.421987 / 0.434364 (-0.012376) | 0.459245 / 0.540337 (-0.081092) | 0.557786 / 1.386936 (-0.829150) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#a129219a48c1b07c06d4bc1db32c317bf513089d \"CML watermark\")\n",
"Will you eventually need help with your PR @Rocketknight1? I'll be happy to help if needed 😄 ",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007577 / 0.011353 (-0.003776) | 0.004960 / 0.011008 (-0.006048) | 0.113622 / 0.038508 (0.075114) | 0.037981 / 0.023109 (0.014872) | 0.355312 / 0.275898 (0.079414) | 0.393384 / 0.323480 (0.069904) | 0.006575 / 0.007986 (-0.001411) | 0.005941 / 0.004328 (0.001612) | 0.085976 / 0.004250 (0.081726) | 0.053784 / 0.037052 (0.016732) | 0.369358 / 0.258489 (0.110869) | 0.399402 / 0.293841 (0.105561) | 0.032155 / 0.128546 (-0.096391) | 0.010448 / 0.075646 (-0.065199) | 0.389009 / 0.419271 (-0.030263) | 0.057377 / 0.043533 (0.013844) | 0.354968 / 0.255139 (0.099829) | 0.382404 / 0.283200 (0.099204) | 0.111056 / 0.141683 (-0.030627) | 1.807986 / 1.452155 (0.355832) | 1.866070 / 1.492716 (0.373354) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.244439 / 0.018006 (0.226432) | 0.491942 / 0.000490 (0.491452) | 0.001910 / 0.000200 (0.001710) | 0.000112 / 0.000054 (0.000058) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031024 / 0.037411 (-0.006387) | 0.129674 / 0.014526 (0.115148) | 0.142974 / 0.176557 (-0.033583) | 0.213568 / 0.737135 (-0.523568) | 0.147794 / 0.296338 (-0.148545) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.480333 / 0.215209 (0.265124) | 4.792901 / 2.077655 (2.715246) | 2.233145 / 1.504120 (0.729025) | 2.036291 / 1.541195 (0.495096) | 2.109631 / 1.468490 
(0.641140) | 0.624546 / 4.584777 (-3.960231) | 4.543511 / 3.745712 (0.797799) | 3.961345 / 5.269862 (-1.308517) | 1.903634 / 4.565676 (-2.662042) | 0.076584 / 0.424275 (-0.347691) | 0.014590 / 0.007607 (0.006983) | 0.593195 / 0.226044 (0.367151) | 5.928740 / 2.268929 (3.659811) | 2.781164 / 55.444624 (-52.663460) | 2.364303 / 6.876477 (-4.512173) | 2.510139 / 2.142072 (0.368067) | 0.770886 / 4.805227 (-4.034341) | 0.167995 / 6.500664 (-6.332669) | 0.076622 / 0.075469 (0.001153) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.402398 / 1.841788 (-0.439390) | 17.921233 / 8.074308 (9.846925) | 17.036738 / 10.191392 (6.845346) | 0.168997 / 0.680424 (-0.511427) | 0.020259 / 0.534201 (-0.513941) | 0.465322 / 0.579283 (-0.113962) | 0.500435 / 0.434364 (0.066071) | 0.546846 / 0.540337 (0.006509) | 0.658130 / 1.386936 (-0.728806) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007624 / 0.011353 (-0.003729) | 0.005265 / 0.011008 (-0.005744) | 0.086886 / 0.038508 (0.048377) | 0.038235 / 0.023109 (0.015126) | 0.463969 / 0.275898 (0.188071) | 0.502451 / 0.323480 (0.178971) | 0.006285 / 0.007986 (-0.001701) | 0.004525 / 0.004328 (0.000197) | 0.086557 / 0.004250 (0.082307) | 0.052414 / 0.037052 (0.015362) | 0.482167 / 0.258489 (0.223678) | 0.513684 / 0.293841 (0.219843) | 0.032929 / 0.128546 (-0.095618) | 0.010249 / 0.075646 (-0.065397) | 0.093377 / 0.419271 (-0.325895) | 0.054114 / 0.043533 (0.010582) | 0.466116 / 0.255139 (0.210977) | 0.488977 / 0.283200 (0.205777) | 0.115446 / 0.141683 (-0.026237) | 1.762912 / 1.452155 (0.310757) | 1.874191 / 1.492716 (0.381475) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.012666 / 0.018006 (-0.005341) | 0.485962 / 0.000490 (0.485473) | 0.002621 / 0.000200 (0.002421) | 0.000128 / 0.000054 (0.000074) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033661 / 0.037411 (-0.003751) | 0.135395 / 0.014526 (0.120869) | 0.147230 / 0.176557 (-0.029326) | 0.205847 / 0.737135 (-0.531288) | 0.151496 / 0.296338 (-0.144842) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.514097 / 0.215209 (0.298887) | 5.134093 / 2.077655 (3.056438) | 2.496775 / 1.504120 (0.992655) | 2.268078 / 1.541195 (0.726883) | 2.342153 / 1.468490 (0.873663) | 0.623130 / 4.584777 (-3.961647) | 4.601787 / 3.745712 (0.856075) | 3.414249 / 5.269862 (-1.855613) | 1.849603 / 4.565676 (-2.716073) | 0.078350 / 0.424275 (-0.345925) | 0.013785 / 0.007607 (0.006178) | 0.638783 / 0.226044 (0.412739) | 6.378356 / 2.268929 (4.109427) | 3.072867 / 55.444624 (-52.371757) | 2.668123 / 6.876477 (-4.208354) | 2.693905 / 2.142072 (0.551833) | 0.764583 / 4.805227 (-4.040644) | 0.166854 / 6.500664 (-6.333810) | 0.076883 / 0.075469 (0.001414) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.502003 / 1.841788 (-0.339784) | 18.674205 / 8.074308 (10.599897) | 16.837759 / 10.191392 (6.646367) | 0.176995 / 0.680424 (-0.503428) | 0.020126 / 0.534201 (-0.514075) | 0.464480 / 0.579283 (-0.114803) | 0.516477 / 0.434364 (0.082113) | 0.549818 / 0.540337 (0.009481) | 0.659927 / 1.386936 (-0.727009) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#a129219a48c1b07c06d4bc1db32c317bf513089d \"CML watermark\")\n",
"@alvarobartt Yes, I'll ping you for a review once it's ready!"
] | "2023-05-22T11:51:07Z" | "2023-06-08T11:09:03Z" | "2023-06-06T16:49:15Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5883.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5883",
"merged_at": "2023-06-06T16:49:15Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5883.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5883"
} | ## What's in this PR?
This PR addresses some minor fixes and general improvements in the `to_tf_dataset` method of `datasets.Dataset`, which converts a 🤗HuggingFace Dataset into a TensorFlow Dataset.
The main bug solved in this PR concerns string encoding: for safety purposes, TensorFlow internally converts `numpy` arrays with a unicode/string `dtype` into `numpy.bytes_` (see the docstring at https://github.com/tensorflow/tensorflow/blob/388d952114e59a1aeda440ed4737b29f8b7c6e8a/tensorflow/python/ops/script_ops.py#L210). This is triggered when using `tensorflow.numpy_function`, which applies another type cast on top of the one `datasets` already does, so the cast happens at least twice per entry/batch. As a result, the `numpy.unicode_` dtype set when the batch data is a string is ignored and replaced by `numpy.bytes_`.
Besides that, some other minor things have been fixed:
* Made `batch_size` an optional parameter in `to_tf_dataset`
* Map the `tensorflow` output dtypes just once, and not in every `tf.function` call during `map`
* Keep `numpy` formatting in the `datasets.Dataset` if already formatted like it, no need to format it again as `numpy`
* Docstring indentation in `dataset_to_tf` and `multiprocess_dataset_to_tf`
## What's missing in this PR?
I can include some integration tests if needed, to validate that `batch_size` is optional, and that the tensors in the TF-Dataset can be looped over with no issues as before. | {
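As a quick usage note, a minimal sketch of the now-optional batch size, assuming this PR is merged; the exact tensor representation in the printout may differ:

```python
from datasets import Dataset

ds = Dataset.from_dict({"a": [1], "b": ["é"]})
tf_ds = ds.to_tf_dataset(batch_size=None)  # yields unbatched examples
for example in tf_ds:
    print(example)  # non-ASCII strings no longer raise UnicodeEncodeError
```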
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5883/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5883/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2462 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2462/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2462/comments | https://api.github.com/repos/huggingface/datasets/issues/2462/events | https://github.com/huggingface/datasets/issues/2462 | 915,384,613 | MDU6SXNzdWU5MTUzODQ2MTM= | 2,462 | Merge DatasetDict and Dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "c5def5",
"default": false,
"description": "Generic discussion on the library",
"id": 2067400324,
"name": "generic discussion",
"node_id": "MDU6TGFiZWwyMDY3NDAwMzI0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion"
}
] | open | false | null | [] | {
"closed_at": null,
"closed_issues": 2,
"created_at": "2021-07-21T15:34:56Z",
"creator": {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
},
"description": "Next minor release",
"due_on": "2021-08-30T07:00:00Z",
"html_url": "https://github.com/huggingface/datasets/milestone/8",
"id": 6968069,
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/8/labels",
"node_id": "MI_kwDODunzps4AalMF",
"number": 8,
"open_issues": 4,
"state": "open",
"title": "1.12",
"updated_at": "2021-10-13T10:26:33Z",
"url": "https://api.github.com/repos/huggingface/datasets/milestones/8"
} | [
"Any update on this? @lhoestq ",
"Unless there is high demande I don't think we will end up implementing this. This is a lot of work with very few advantages"
] | "2021-06-08T19:22:04Z" | "2023-08-16T09:34:34Z" | null | MEMBER | null | null | null | As discussed in #2424 and #2437 (please see there for detailed conversation):
- It would be desirable to improve the UX with respect to the confusion between DatasetDict and Dataset.
- The difference between Dataset and DatasetDict is an additional layer of abstraction that confuses "typical" end users.
- A user expects a "Dataset" (whether it contains multiple splits or a single one), and it could be interesting to simplify the user-facing API as much as possible to hide this complexity from the end user.
Here is a proposal for discussion and refinement (to be abandoned if it turns out not to be good enough):
- let's consider that a DatasetDict is also a Dataset with the various splits concatenated one after the other
- let's disallow the use of integers in split names (probably not a very big breaking change)
- when you index with integers, you access the examples progressively, one split after the other (in a deterministic order)
- when you index with strings/split name you have the same behavior as now (full backward compat)
- let's then also have all the methods of a Dataset on the DatasetDict
The end goal would be to merge the Dataset and DatasetDict objects into a single object that would be (pretty much totally) backward compatible with both.
There are a few things that we could discuss if we want to merge Dataset and DatasetDict:
1. What happens if you index by a string? Does it return the column or the split? We could disallow conflicts between column names and split names to avoid ambiguities. It can be surprising to be able to get a column or a split using the same indexing feature:
```
from datasets import load_dataset
dataset = load_dataset(...)
dataset["train"]
dataset["input_ids"]
```
2. What happens when you iterate over the object? I guess it should iterate over the examples, as a Dataset object does, but a DatasetDict currently iterates over the splits, since they are the dictionary keys. This is a breaking change that we can discuss (see the sketch below).
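A sketch of the two iteration behaviors in question (`imdb` is just an example dataset name):

```python
from datasets import load_dataset

dset = load_dataset("imdb")  # DatasetDict
for key in dset:             # today: yields split names ("train", "test", ...)
    print(key)

ds = dset["train"]           # Dataset
for example in ds:           # yields examples (dicts)
    break
```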
Moreover, regarding your points:
- integers are already not allowed as split names
- it's definitely doable to have all the methods. Maybe some of them, like `train_test_split` (currently only available on `Dataset`), can be tweaked to work for a split dataset (see the sketch below)
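For illustration, the current asymmetry with `train_test_split` (a sketch; `imdb` is just an example dataset name):

```python
from datasets import load_dataset

ds = load_dataset("imdb", split="train")      # Dataset
splits = ds.train_test_split(test_size=0.1)   # works today

dset = load_dataset("imdb")                   # DatasetDict
# dset.train_test_split(test_size=0.1)        # AttributeError today
```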
cc: @thomwolf @lhoestq | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2462/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2462/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5975 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5975/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5975/comments | https://api.github.com/repos/huggingface/datasets/issues/5975/events | https://github.com/huggingface/datasets/issues/5975 | 1,768,271,343 | I_kwDODunzps5pZa3v | 5,975 | Streaming Dataset behind Proxy - FileNotFoundError | {
"avatar_url": "https://avatars.githubusercontent.com/u/135350576?v=4",
"events_url": "https://api.github.com/users/Veluchs/events{/privacy}",
"followers_url": "https://api.github.com/users/Veluchs/followers",
"following_url": "https://api.github.com/users/Veluchs/following{/other_user}",
"gists_url": "https://api.github.com/users/Veluchs/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Veluchs",
"id": 135350576,
"login": "Veluchs",
"node_id": "U_kgDOCBFJMA",
"organizations_url": "https://api.github.com/users/Veluchs/orgs",
"received_events_url": "https://api.github.com/users/Veluchs/received_events",
"repos_url": "https://api.github.com/users/Veluchs/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Veluchs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Veluchs/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Veluchs"
} | [] | closed | false | null | [] | null | [
"Duplicate of #",
"Hi ! can you try to set the upper case environment variables `HTTP_PROXY` and `HTTPS_PROXY` ?\r\n\r\nWe use `aiohttp` for streaming and it uses case sensitive environment variables",
"Hi, thanks for the quick reply.\r\n\r\nI set the uppercase env variables with\r\n\r\n`\r\nos.environ['HTTP_PROXY'] = \"http://example.com:xxxx\" \r\nos.environ['HTTPS_PROXY'] = \"http://example.com:xxxx\" \r\n`\r\n\r\nHowever, I still get the same error.\r\n\r\nOne thing that could be helpfull: When downloading a dataset without streaming i get the following message:\r\n_HF google storage unreachable. Downloading and preparing it from source_.\r\nThe download does however work as expected.\r\n",
"Are you able to use `aiohttp` to get the file at `https://huggingface.co/datasets/facebook/voxpopuli/resolve/main/data/n_files.json` using your proxy ?",
"It only works when passing trust_env=True when creating the ClientSession, as well as setting ssl=False.\r\n\r\nWorking Example:\r\n\r\n```\r\nimport os\r\n\r\nos.environ['HTTP_PROXY'] = \"xyz\"\r\nos.environ['HTTPS_PROXY'] = \"xyz\"\r\n\r\nimport asyncio\r\nimport aiohttp\r\n\r\nasync def download_pep(url):\r\n async with aiohttp.ClientSession(trust_env=True) as session:\r\n print(\"1\")\r\n async with session.get(url, ssl=False) as resp:\r\n print(\"2\")\r\n content = await resp.text()\r\n print(content)\r\n return content\r\n\r\nasyncio.run(download_pep(\"https://huggingface.co/datasets/facebook/voxpopuli/resolve/main/data/n_files.json\"))\r\n```\r\n\r\n\r\n\r\nSSL Verification has been a problem with other packages as well. Usually I circumvent the problem by setting\r\n```\r\nimport ssl\r\nssl._create_default_https_context = ssl._create_unverified_context\r\n```\r\n(probably not the best idea for security), although here aiohttp does not seem to use this default context.",
"We do pass `trust_env` as well. Could you share the full stack trace you get when streaming using `datasets` ? That could help locate where we might have forgotten to pass `trust_env`",
"Is there a way to disable ssl verification when streaming a dataset. I suspect this might be the isssue with my proxy.\r\n\r\n\r\nHere you go:\r\n\r\n```\r\nFileNotFoundError Traceback (most recent call last)\r\nCell In[8], line 3\r\n 1 from datasets import load_dataset\r\n----> 3 ds = load_dataset(\"facebook/voxpopuli\", name=\"de\", streaming=True)\r\n 5 sample = next(iter(ds))\r\n\r\nFile [~/.conda/envs/audio_hf/lib/python3.10/site-packages/datasets/load.py:1790](https://vscode-remote+ssh-002dremote-002bml-002er-002dsoftware-002eat.vscode-resource.vscode-cdn.net/home/wrsbri/projects/audio_course/~/.conda/envs/audio_hf/lib/python3.10/site-packages/datasets/load.py:1790), in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, num_proc, storage_options, **config_kwargs)\r\n 1788 # Return iterable dataset in case of streaming\r\n 1789 if streaming:\r\n-> 1790 return builder_instance.as_streaming_dataset(split=split)\r\n 1792 # Some datasets are already processed on the HF google storage\r\n 1793 # Don't try downloading from Google storage for the packaged datasets as text, json, csv or pandas\r\n 1794 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES\r\n\r\nFile [~/.conda/envs/audio_hf/lib/python3.10/site-packages/datasets/builder.py:1281](https://vscode-remote+ssh-002dremote-002bml-002er-002dsoftware-002eat.vscode-resource.vscode-cdn.net/home/wrsbri/projects/audio_course/~/.conda/envs/audio_hf/lib/python3.10/site-packages/datasets/builder.py:1281), in DatasetBuilder.as_streaming_dataset(self, split, base_path)\r\n 1274 dl_manager = StreamingDownloadManager(\r\n 1275 base_path=base_path or self.base_path,\r\n 1276 download_config=DownloadConfig(use_auth_token=self.use_auth_token, storage_options=self.storage_options),\r\n 1277 dataset_name=self.name,\r\n 1278 data_dir=self.config.data_dir,\r\n 1279 )\r\n 1280 self._check_manual_download(dl_manager)\r\n-> 1281 splits_generators = {sg.name: sg for sg in self._split_generators(dl_manager)}\r\n 1282 # By default, return all splits\r\n 1283 if split is None:\r\n\r\nFile [~/.cache/huggingface/modules/datasets_modules/datasets/facebook--voxpopuli/b5ff837284f0778eefe0f642734e142d8c3f574eba8c9c8a4b13602297f73604/voxpopuli.py:120](https://vscode-remote+ssh-002dremote-002bml-002er-002dsoftware-002eat.vscode-resource.vscode-cdn.net/home/wrsbri/projects/audio_course/~/.cache/huggingface/modules/datasets_modules/datasets/facebook--voxpopuli/b5ff837284f0778eefe0f642734e142d8c3f574eba8c9c8a4b13602297f73604/voxpopuli.py:120), in Voxpopuli._split_generators(self, dl_manager)\r\n 118 def _split_generators(self, dl_manager):\r\n 119 n_shards_path = dl_manager.download_and_extract(_N_SHARDS_FILE)\r\n--> 120 with open(n_shards_path) as f:\r\n 121 n_shards = json.load(f)\r\n 123 if self.config.name == \"en_accented\":\r\n\r\nFile [~/.conda/envs/audio_hf/lib/python3.10/site-packages/datasets/streaming.py:71](https://vscode-remote+ssh-002dremote-002bml-002er-002dsoftware-002eat.vscode-resource.vscode-cdn.net/home/wrsbri/projects/audio_course/~/.conda/envs/audio_hf/lib/python3.10/site-packages/datasets/streaming.py:71), in extend_module_for_streaming..wrap_auth..wrapper(*args, **kwargs)\r\n 69 @wraps(function)\r\n 70 def wrapper(*args, **kwargs):\r\n---> 71 return function(*args, use_auth_token=use_auth_token, **kwargs)\r\n\r\nFile 
[~/.conda/envs/audio_hf/lib/python3.10/site-packages/datasets/download/streaming_download_manager.py:517](https://vscode-remote+ssh-002dremote-002bml-002er-002dsoftware-002eat.vscode-resource.vscode-cdn.net/home/wrsbri/projects/audio_course/~/.conda/envs/audio_hf/lib/python3.10/site-packages/datasets/download/streaming_download_manager.py:517), in xopen(file, mode, use_auth_token, *args, **kwargs)\r\n 515 except FileNotFoundError:\r\n 516 if file.startswith(config.HF_ENDPOINT):\r\n--> 517 raise FileNotFoundError(\r\n 518 file + \"\\nIf the repo is private or gated, make sure to log in with `huggingface-cli login`.\"\r\n 519 ) from None\r\n 520 else:\r\n 521 raise\r\n\r\nFileNotFoundError: https://huggingface.co/datasets/facebook/voxpopuli/resolve/main/data/n_files.json\r\nIf the repo is private or gated, make sure to log in with `huggingface-cli login`.\r\n```",
"> Is there a way to disable ssl verification when streaming a dataset.\r\n\r\nI don't think so.\r\n\r\nWe use `fsspec` HTTPFileSystem implementation that is based on `aiohttp`. If you register a subclass of HTTPFileSystem that has SSL disabled by default it could work, but I wouldn't recommended it because it can raise security issues.",
"Okay thanks for your help! I guess I have to figure out how to improve the proxy environment / see if I can make it work with ssl connections."
] | "2023-06-21T19:10:02Z" | "2023-06-30T05:55:39Z" | "2023-06-30T05:55:38Z" | NONE | null | null | null | ### Describe the bug
When trying to stream a dataset, I get the following error after a few minutes of waiting.
```
FileNotFoundError: https://huggingface.co/datasets/facebook/voxpopuli/resolve/main/data/n_files.json
If the repo is private or gated, make sure to log in with `huggingface-cli login`.
```
I have already set the proxy environment variables. Downloading a Dataset without streaming works as expected.
Still, I suspect that this is connected to being behind a proxy.
Is there a way to set the proxy for streaming datasets? Possibly a keyword argument that gets passed to fsspec? (See the sketch below.)
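An untested, hedged sketch of what that might look like (the `client_kwargs` key is an fsspec `HTTPFileSystem` option; whether `datasets` forwards it to the streaming filesystem here is an assumption on my part):

```python
from datasets import load_dataset

# Untested sketch: fsspec's HTTPFileSystem passes `client_kwargs` to
# aiohttp.ClientSession, so trust_env=True may make it honor the
# HTTP_PROXY / HTTPS_PROXY environment variables.
ds = load_dataset(
    "facebook/voxpopuli",
    name="de",
    streaming=True,
    storage_options={"client_kwargs": {"trust_env": True}},
)
```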
### Steps to reproduce the bug
This is the code I use.
```
import os
os.environ['http_proxy'] = "http://example.com:xxxx"
os.environ['https_proxy'] = "http://example.com:xxxx"
from datasets import load_dataset
ds = load_dataset("facebook/voxpopuli", name="de", streaming=True)
```
### Expected behavior
I would expect the streaming functionality to use the set proxy settings.
### Environment info
- `datasets` version: 2.13.0
- Platform: Linux-5.15.0-73-generic-x86_64-with-glibc2.35
- Python version: 3.10.11
- Huggingface_hub version: 0.15.1
- PyArrow version: 11.0.0
- Pandas version: 2.0.2
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5975/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5975/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2685 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2685/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2685/comments | https://api.github.com/repos/huggingface/datasets/issues/2685/events | https://github.com/huggingface/datasets/pull/2685 | 948,791,572 | MDExOlB1bGxSZXF1ZXN0NjkzNTgxNTk2 | 2,685 | Fix Blog Authorship Corpus dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"Normally, I'm expecting errors from the validation of the README file... 😅 ",
"That is:\r\n```\r\n=========================== short test summary info ============================\r\nFAILED tests/test_dataset_cards.py::test_changed_dataset_card[blog_authorship_corpus]\r\n==== 1 failed, 3182 passed, 2763 skipped, 16 warnings in 201.23s (0:03:21) =====\r\n```",
"@lhoestq, apart from the dataset card, everything is OK with this PR: I tested it locally."
] | "2021-07-20T15:44:50Z" | "2021-07-21T13:11:58Z" | "2021-07-21T13:11:58Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2685.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2685",
"merged_at": "2021-07-21T13:11:57Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2685.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2685"
} | This PR:
- Updates the JSON metadata file, which previously was raising a `NonMatchingSplitsSizesError`
- Fixes the codec of the data files (`latin_1` instead of `utf-8`), which previously was raising a `UnicodeDecodeError` for some files (see the sketch below)
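A minimal illustration of the codec issue (not code from this PR):

```python
raw = b"caf\xe9"              # 'café' encoded with latin_1
print(raw.decode("latin_1"))  # café
raw.decode("utf-8")           # raises UnicodeDecodeError
```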
Close #2679. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2685/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2685/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1103 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1103/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1103/comments | https://api.github.com/repos/huggingface/datasets/issues/1103/events | https://github.com/huggingface/datasets/issues/1103 | 757,016,820 | MDU6SXNzdWU3NTcwMTY4MjA= | 1,103 | Add support to download kaggle datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4",
"events_url": "https://api.github.com/users/abhishekkrthakur/events{/privacy}",
"followers_url": "https://api.github.com/users/abhishekkrthakur/followers",
"following_url": "https://api.github.com/users/abhishekkrthakur/following{/other_user}",
"gists_url": "https://api.github.com/users/abhishekkrthakur/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/abhishekkrthakur",
"id": 1183441,
"login": "abhishekkrthakur",
"node_id": "MDQ6VXNlcjExODM0NDE=",
"organizations_url": "https://api.github.com/users/abhishekkrthakur/orgs",
"received_events_url": "https://api.github.com/users/abhishekkrthakur/received_events",
"repos_url": "https://api.github.com/users/abhishekkrthakur/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/abhishekkrthakur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abhishekkrthakur/subscriptions",
"type": "User",
"url": "https://api.github.com/users/abhishekkrthakur"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | null | [
"Hey, I think this is great idea. Any plan to integrate kaggle private datasets loading to `datasets`?",
"The workflow for downloading a Kaggle dataset and turning it into an HF dataset is pretty simple:\r\n```python\r\n!kaggle datasets download -p path\r\nds = load_dataset(path)\r\n```\r\n\r\nNative support would make our download logic even more complex, and I don't think this is a good idea considering this particular feature is not requested often. \r\n\r\nPS: Kaggle should integrate their API with `fsspec` to allow us to use a common interface if they are interested in tighter integrations"
] | "2020-12-04T11:08:37Z" | "2023-07-20T15:22:24Z" | "2023-07-20T15:22:23Z" | MEMBER | null | null | null | We can use API key | {
"+1": 3,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1103/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1103/timeline | null | not_planned | false |
https://api.github.com/repos/huggingface/datasets/issues/5978 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5978/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5978/comments | https://api.github.com/repos/huggingface/datasets/issues/5978/events | https://github.com/huggingface/datasets/pull/5978 | 1,770,187,053 | PR_kwDODunzps5Tru2_ | 5,978 | Release: 2.13.1 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006173 / 0.011353 (-0.005180) | 0.003773 / 0.011008 (-0.007235) | 0.099499 / 0.038508 (0.060991) | 0.037918 / 0.023109 (0.014809) | 0.321329 / 0.275898 (0.045431) | 0.379739 / 0.323480 (0.056259) | 0.004664 / 0.007986 (-0.003322) | 0.002943 / 0.004328 (-0.001385) | 0.077759 / 0.004250 (0.073509) | 0.055271 / 0.037052 (0.018219) | 0.329428 / 0.258489 (0.070939) | 0.378731 / 0.293841 (0.084890) | 0.027737 / 0.128546 (-0.100810) | 0.008566 / 0.075646 (-0.067081) | 0.313220 / 0.419271 (-0.106052) | 0.047101 / 0.043533 (0.003568) | 0.316211 / 0.255139 (0.061072) | 0.341826 / 0.283200 (0.058626) | 0.020838 / 0.141683 (-0.120845) | 1.550064 / 1.452155 (0.097909) | 1.706518 / 1.492716 (0.213801) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.203093 / 0.018006 (0.185087) | 0.425345 / 0.000490 (0.424856) | 0.004800 / 0.000200 (0.004600) | 0.000077 / 0.000054 (0.000022) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024590 / 0.037411 (-0.012821) | 0.098115 / 0.014526 (0.083589) | 0.108274 / 0.176557 (-0.068282) | 0.170804 / 0.737135 (-0.566332) | 0.110560 / 0.296338 (-0.185778) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.425251 / 0.215209 (0.210042) | 4.239075 / 2.077655 (2.161421) | 1.955601 / 1.504120 (0.451481) | 1.774796 / 1.541195 (0.233602) | 1.826641 / 1.468490 
(0.358150) | 0.558777 / 4.584777 (-4.026000) | 3.361697 / 3.745712 (-0.384015) | 1.764468 / 5.269862 (-3.505394) | 1.032280 / 4.565676 (-3.533396) | 0.067872 / 0.424275 (-0.356403) | 0.010998 / 0.007607 (0.003391) | 0.525682 / 0.226044 (0.299637) | 5.254356 / 2.268929 (2.985427) | 2.384332 / 55.444624 (-53.060292) | 2.045578 / 6.876477 (-4.830898) | 2.170914 / 2.142072 (0.028841) | 0.674782 / 4.805227 (-4.130445) | 0.135351 / 6.500664 (-6.365314) | 0.066591 / 0.075469 (-0.008878) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.209181 / 1.841788 (-0.632606) | 14.044518 / 8.074308 (5.970210) | 13.184705 / 10.191392 (2.993313) | 0.130836 / 0.680424 (-0.549588) | 0.016582 / 0.534201 (-0.517619) | 0.360005 / 0.579283 (-0.219279) | 0.379519 / 0.434364 (-0.054845) | 0.422174 / 0.540337 (-0.118164) | 0.515546 / 1.386936 (-0.871390) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006293 / 0.011353 (-0.005060) | 0.003784 / 0.011008 (-0.007224) | 0.079248 / 0.038508 (0.040739) | 0.038452 / 0.023109 (0.015343) | 0.444727 / 0.275898 (0.168829) | 0.500535 / 0.323480 (0.177055) | 0.003455 / 0.007986 (-0.004531) | 0.002873 / 0.004328 (-0.001455) | 0.077439 / 0.004250 (0.073189) | 0.047855 / 0.037052 (0.010803) | 0.448049 / 0.258489 (0.189560) | 0.509517 / 0.293841 (0.215676) | 0.028359 / 0.128546 (-0.100188) | 0.008503 / 0.075646 (-0.067143) | 0.084961 / 0.419271 (-0.334310) | 0.042880 / 0.043533 (-0.000653) | 0.436628 / 0.255139 (0.181489) | 0.456574 / 0.283200 (0.173375) | 0.019539 / 0.141683 (-0.122144) | 1.561273 / 1.452155 (0.109118) | 1.572018 / 1.492716 (0.079301) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.230250 / 0.018006 (0.212244) | 0.415189 / 0.000490 (0.414700) | 0.003213 / 0.000200 (0.003013) | 0.000080 / 0.000054 (0.000026) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025541 / 0.037411 (-0.011871) | 0.102326 / 0.014526 (0.087800) | 0.110258 / 0.176557 (-0.066298) | 0.162488 / 0.737135 (-0.574647) | 0.112782 / 0.296338 (-0.183556) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.457936 / 0.215209 (0.242727) | 4.581503 / 2.077655 (2.503848) | 2.237659 / 1.504120 (0.733540) | 2.029960 / 1.541195 (0.488765) | 2.082911 / 1.468490 (0.614421) | 0.556485 / 4.584777 (-4.028292) | 3.384418 / 3.745712 (-0.361295) | 1.748809 / 5.269862 (-3.521053) | 1.034759 / 4.565676 (-3.530917) | 0.067500 / 0.424275 (-0.356776) | 0.011425 / 0.007607 (0.003818) | 0.561340 / 0.226044 (0.335295) | 5.623629 / 2.268929 (3.354701) | 2.733587 / 55.444624 (-52.711038) | 2.401578 / 6.876477 (-4.474899) | 2.524569 / 2.142072 (0.382496) | 0.673170 / 4.805227 (-4.132057) | 0.136681 / 6.500664 (-6.363983) | 0.068060 / 0.075469 (-0.007409) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.318651 / 1.841788 (-0.523137) | 14.362123 / 8.074308 (6.287815) | 14.385964 / 10.191392 (4.194572) | 0.149914 / 0.680424 (-0.530510) | 0.016877 / 0.534201 (-0.517324) | 0.358406 / 0.579283 (-0.220877) | 0.394349 / 0.434364 (-0.040015) | 0.422471 / 0.540337 (-0.117866) | 0.513807 / 1.386936 (-0.873129) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#1b9ce11d1b94e6178df663ff5fcad029849d10fb \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006272 / 0.011353 (-0.005080) | 0.003903 / 0.011008 (-0.007105) | 0.100180 / 0.038508 (0.061672) | 0.037799 / 0.023109 (0.014690) | 0.385627 / 0.275898 (0.109729) | 0.446518 / 0.323480 (0.123038) | 0.004811 / 0.007986 (-0.003175) | 0.003032 / 0.004328 (-0.001296) | 0.077063 / 0.004250 (0.072812) | 0.055564 / 0.037052 (0.018512) | 0.397346 / 0.258489 (0.138857) | 0.443242 / 0.293841 (0.149401) | 0.027904 / 0.128546 (-0.100642) | 0.008386 / 0.075646 (-0.067260) | 0.315013 / 0.419271 (-0.104259) | 0.047943 / 0.043533 (0.004410) | 0.378443 / 0.255139 (0.123304) | 0.411472 / 0.283200 (0.128272) | 0.020465 / 0.141683 (-0.121218) | 1.526594 / 1.452155 (0.074439) | 1.547018 / 1.492716 (0.054301) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.219377 / 0.018006 (0.201370) | 0.430254 / 0.000490 (0.429764) | 0.003218 / 0.000200 (0.003018) | 0.000072 / 0.000054 (0.000017) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023667 / 0.037411 (-0.013744) | 0.099143 / 0.014526 (0.084617) | 0.106044 / 0.176557 (-0.070513) | 0.166186 / 0.737135 (-0.570949) | 0.108736 / 0.296338 (-0.187603) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.437971 / 0.215209 (0.222762) | 4.363675 / 2.077655 (2.286021) | 2.011993 / 1.504120 (0.507873) | 1.845189 / 1.541195 (0.303994) | 1.831848 / 1.468490 
(0.363358) | 0.562402 / 4.584777 (-4.022375) | 3.365259 / 3.745712 (-0.380453) | 1.781491 / 5.269862 (-3.488371) | 1.023454 / 4.565676 (-3.542223) | 0.067857 / 0.424275 (-0.356418) | 0.011076 / 0.007607 (0.003469) | 0.532267 / 0.226044 (0.306223) | 5.340344 / 2.268929 (3.071415) | 2.388649 / 55.444624 (-53.055976) | 2.055373 / 6.876477 (-4.821104) | 2.205047 / 2.142072 (0.062975) | 0.672909 / 4.805227 (-4.132318) | 0.135244 / 6.500664 (-6.365420) | 0.066184 / 0.075469 (-0.009285) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.206838 / 1.841788 (-0.634950) | 13.967075 / 8.074308 (5.892767) | 13.143971 / 10.191392 (2.952579) | 0.143991 / 0.680424 (-0.536433) | 0.016673 / 0.534201 (-0.517527) | 0.376180 / 0.579283 (-0.203103) | 0.386550 / 0.434364 (-0.047814) | 0.440590 / 0.540337 (-0.099747) | 0.529974 / 1.386936 (-0.856962) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006299 / 0.011353 (-0.005054) | 0.003784 / 0.011008 (-0.007224) | 0.077875 / 0.038508 (0.039367) | 0.038689 / 0.023109 (0.015580) | 0.421684 / 0.275898 (0.145786) | 0.472649 / 0.323480 (0.149169) | 0.003570 / 0.007986 (-0.004415) | 0.004448 / 0.004328 (0.000120) | 0.077867 / 0.004250 (0.073616) | 0.049514 / 0.037052 (0.012462) | 0.375983 / 0.258489 (0.117494) | 0.470632 / 0.293841 (0.176791) | 0.028238 / 0.128546 (-0.100308) | 0.008462 / 0.075646 (-0.067185) | 0.082452 / 0.419271 (-0.336819) | 0.043617 / 0.043533 (0.000084) | 0.400874 / 0.255139 (0.145735) | 0.426191 / 0.283200 (0.142992) | 0.020602 / 0.141683 (-0.121081) | 1.567658 / 1.452155 (0.115504) | 1.572610 / 1.492716 (0.079893) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.246144 / 0.018006 (0.228138) | 0.419402 / 0.000490 (0.418913) | 0.001691 / 0.000200 (0.001491) | 0.000071 / 0.000054 (0.000017) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026105 / 0.037411 (-0.011306) | 0.104734 / 0.014526 (0.090208) | 0.110257 / 0.176557 (-0.066300) | 0.161429 / 0.737135 (-0.575706) | 0.114367 / 0.296338 (-0.181972) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.453352 / 0.215209 (0.238143) | 4.537924 / 2.077655 (2.460269) | 2.196193 / 1.504120 (0.692073) | 2.002087 / 1.541195 (0.460892) | 2.041722 / 1.468490 (0.573231) | 0.561643 / 4.584777 (-4.023134) | 3.449108 / 3.745712 (-0.296605) | 2.862800 / 5.269862 (-2.407062) | 1.387895 / 4.565676 (-3.177782) | 0.068076 / 0.424275 (-0.356199) | 0.011568 / 0.007607 (0.003961) | 0.559279 / 0.226044 (0.333235) | 5.598738 / 2.268929 (3.329809) | 2.676649 / 55.444624 (-52.767975) | 2.334588 / 6.876477 (-4.541889) | 2.376215 / 2.142072 (0.234142) | 0.673109 / 4.805227 (-4.132118) | 0.137587 / 6.500664 (-6.363077) | 0.069131 / 0.075469 (-0.006338) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.307332 / 1.841788 (-0.534456) | 14.536036 / 8.074308 (6.461728) | 14.173734 / 10.191392 (3.982342) | 0.145143 / 0.680424 (-0.535281) | 0.016662 / 0.534201 (-0.517539) | 0.366901 / 0.579283 (-0.212383) | 0.394498 / 0.434364 (-0.039866) | 0.430546 / 0.540337 (-0.109792) | 0.518950 / 1.386936 (-0.867986) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#682d21e94ab1e64c11b583de39dc4c93f0101c5a \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008122 / 0.011353 (-0.003231) | 0.005585 / 0.011008 (-0.005424) | 0.121219 / 0.038508 (0.082711) | 0.047616 / 0.023109 (0.024507) | 0.440576 / 0.275898 (0.164678) | 0.491053 / 0.323480 (0.167573) | 0.004774 / 0.007986 (-0.003211) | 0.006758 / 0.004328 (0.002430) | 0.103852 / 0.004250 (0.099602) | 0.071560 / 0.037052 (0.034508) | 0.463107 / 0.258489 (0.204618) | 0.516904 / 0.293841 (0.223063) | 0.048052 / 0.128546 (-0.080494) | 0.013679 / 0.075646 (-0.061968) | 0.428383 / 0.419271 (0.009112) | 0.069468 / 0.043533 (0.025936) | 0.432593 / 0.255139 (0.177454) | 0.471810 / 0.283200 (0.188611) | 0.037541 / 0.141683 (-0.104142) | 1.823490 / 1.452155 (0.371335) | 1.922558 / 1.492716 (0.429842) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.252315 / 0.018006 (0.234309) | 0.541757 / 0.000490 (0.541267) | 0.000373 / 0.000200 (0.000173) | 0.000083 / 0.000054 (0.000028) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030361 / 0.037411 (-0.007050) | 0.125928 / 0.014526 (0.111402) | 0.145102 / 0.176557 (-0.031455) | 0.209798 / 0.737135 (-0.527337) | 0.147349 / 0.296338 (-0.148990) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.627554 / 0.215209 (0.412345) | 5.917422 / 2.077655 (3.839767) | 2.491083 / 1.504120 (0.986963) | 2.147078 / 1.541195 (0.605883) | 2.167511 / 1.468490 
(0.699021) | 0.903061 / 4.584777 (-3.681716) | 5.518537 / 3.745712 (1.772825) | 2.654348 / 5.269862 (-2.615514) | 1.645121 / 4.565676 (-2.920556) | 0.103782 / 0.424275 (-0.320493) | 0.013048 / 0.007607 (0.005441) | 0.756732 / 0.226044 (0.530687) | 7.622873 / 2.268929 (5.353945) | 3.122689 / 55.444624 (-52.321936) | 2.537735 / 6.876477 (-4.338742) | 2.640090 / 2.142072 (0.498018) | 1.128635 / 4.805227 (-3.676593) | 0.228089 / 6.500664 (-6.272575) | 0.086207 / 0.075469 (0.010738) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.561591 / 1.841788 (-0.280197) | 18.110299 / 8.074308 (10.035991) | 20.718017 / 10.191392 (10.526625) | 0.225741 / 0.680424 (-0.454682) | 0.031738 / 0.534201 (-0.502463) | 0.530789 / 0.579283 (-0.048495) | 0.607364 / 0.434364 (0.173000) | 0.581593 / 0.540337 (0.041256) | 0.726033 / 1.386936 (-0.660903) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009323 / 0.011353 (-0.002030) | 0.005360 / 0.011008 (-0.005649) | 0.103608 / 0.038508 (0.065100) | 0.050158 / 0.023109 (0.027049) | 0.499906 / 0.275898 (0.224008) | 0.561005 / 0.323480 (0.237525) | 0.005093 / 0.007986 (-0.002892) | 0.008285 / 0.004328 (0.003956) | 0.103446 / 0.004250 (0.099196) | 0.061478 / 0.037052 (0.024426) | 0.494016 / 0.258489 (0.235527) | 0.537550 / 0.293841 (0.243709) | 0.048829 / 0.128546 (-0.079717) | 0.017032 / 0.075646 (-0.058614) | 0.107748 / 0.419271 (-0.311524) | 0.065607 / 0.043533 (0.022074) | 0.488709 / 0.255139 (0.233570) | 0.512023 / 0.283200 (0.228823) | 0.032067 / 0.141683 (-0.109616) | 1.907585 / 1.452155 (0.455431) | 1.960994 / 1.492716 (0.468278) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.278378 / 0.018006 (0.260371) | 0.551474 / 0.000490 (0.550985) | 0.006886 / 0.000200 (0.006686) | 0.000106 / 0.000054 (0.000051) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030674 / 0.037411 (-0.006737) | 0.135179 / 0.014526 (0.120654) | 0.133703 / 0.176557 (-0.042853) | 0.198923 / 0.737135 (-0.538212) | 0.155108 / 0.296338 (-0.141231) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.690566 / 0.215209 (0.475357) | 6.789594 / 2.077655 (4.711940) | 2.940668 / 1.504120 (1.436549) | 2.562431 / 1.541195 (1.021236) | 2.554232 / 1.468490 (1.085742) | 0.888470 / 4.584777 (-3.696307) | 5.672318 / 3.745712 (1.926606) | 2.741626 / 5.269862 (-2.528236) | 1.818336 / 4.565676 (-2.747340) | 0.110434 / 0.424275 (-0.313841) | 0.014114 / 0.007607 (0.006507) | 0.830632 / 0.226044 (0.604588) | 8.270787 / 2.268929 (6.001859) | 3.723486 / 55.444624 (-51.721139) | 2.993671 / 6.876477 (-3.882806) | 2.918273 / 2.142072 (0.776201) | 1.105337 / 4.805227 (-3.699891) | 0.222976 / 6.500664 (-6.277688) | 0.085290 / 0.075469 (0.009820) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.816027 / 1.841788 (-0.025760) | 18.496850 / 8.074308 (10.422541) | 20.457032 / 10.191392 (10.265640) | 0.243533 / 0.680424 (-0.436891) | 0.027044 / 0.534201 (-0.507157) | 0.500752 / 0.579283 (-0.078531) | 0.620963 / 0.434364 (0.186599) | 0.607995 / 0.540337 (0.067658) | 0.722915 / 1.386936 (-0.664021) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#682d21e94ab1e64c11b583de39dc4c93f0101c5a \"CML watermark\")\n"
] | "2023-06-22T18:23:11Z" | "2023-06-22T18:40:24Z" | "2023-06-22T18:30:16Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5978.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5978",
"merged_at": "2023-06-22T18:30:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5978.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5978"
} | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5978/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5978/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5479 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5479/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5479/comments | https://api.github.com/repos/huggingface/datasets/issues/5479/events | https://github.com/huggingface/datasets/issues/5479 | 1,560,357,590 | I_kwDODunzps5dASrW | 5,479 | audiofolder works on local env, but creates empty dataset in a remote one, what dependencies could I be missing/outdated | {
"avatar_url": "https://avatars.githubusercontent.com/u/107211437?v=4",
"events_url": "https://api.github.com/users/jcho19/events{/privacy}",
"followers_url": "https://api.github.com/users/jcho19/followers",
"following_url": "https://api.github.com/users/jcho19/following{/other_user}",
"gists_url": "https://api.github.com/users/jcho19/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jcho19",
"id": 107211437,
"login": "jcho19",
"node_id": "U_kgDOBmPqrQ",
"organizations_url": "https://api.github.com/users/jcho19/orgs",
"received_events_url": "https://api.github.com/users/jcho19/received_events",
"repos_url": "https://api.github.com/users/jcho19/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jcho19/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jcho19/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jcho19"
} | [] | closed | false | null | [] | null | [] | "2023-01-27T20:01:22Z" | "2023-01-29T05:23:14Z" | "2023-01-29T05:23:14Z" | NONE | null | null | null | ### Describe the bug
I'm using a custom audio dataset (400+ audio files) in the correct format for audiofolder. Although loading the dataset with audiofolder works in one local setup, it doesn't in a remote one (it just creates an empty dataset). I have both ffmpeg and libsndfile installed on both computers; what could be missing or outdated on the one that doesn't work? On the remote env, libsndfile is 1.0.28 and ffmpeg is 4.2.1.
```python
from datasets import load_dataset
ds = load_dataset("audiofolder", data_dir="...")
```
Here is the output (should be generating 400+ rows):
```
Downloading and preparing dataset audiofolder/default to ...
Downloading data files: 0%| | 0/2 [00:00<?, ?it/s]
Downloading data files: 0it [00:00, ?it/s]
Extracting data files: 0it [00:00, ?it/s]
Generating train split: 0 examples [00:00, ? examples/s]
Dataset audiofolder downloaded and prepared to ... Subsequent calls will reuse this data.
0%| | 0/1 [00:00<?, ?it/s]
DatasetDict({
    train: Dataset({
        features: ['audio', 'transcription'],
        num_rows: 1
    })
})
```
Here is my pip environment in the one that doesn't work (uses torch 1.11.a0 from shared env):
Package Version
------------------- -------------------
aiofiles 22.1.0
aiohttp 3.8.3
aiosignal 1.3.1
altair 4.2.1
anyio 3.6.2
appdirs 1.4.4
argcomplete 2.0.0
argon2-cffi 20.1.0
astunparse 1.6.3
async-timeout 4.0.2
attrs 21.2.0
audioread 3.0.0
backcall 0.2.0
bleach 4.0.0
certifi 2021.10.8
cffi 1.14.6
charset-normalizer 2.0.12
click 8.1.3
contourpy 1.0.7
cycler 0.11.0
datasets 2.9.0
debugpy 1.4.1
decorator 5.0.9
defusedxml 0.7.1
dill 0.3.6
distlib 0.3.4
entrypoints 0.3
evaluate 0.4.0
expecttest 0.1.3
fastapi 0.89.1
ffmpy 0.3.0
filelock 3.6.0
fonttools 4.38.0
frozenlist 1.3.3
fsspec 2023.1.0
future 0.18.2
gradio 3.16.2
h11 0.14.0
httpcore 0.16.3
httpx 0.23.3
huggingface-hub 0.12.0
idna 3.3
ipykernel 6.2.0
ipython 7.26.0
ipython-genutils 0.2.0
ipywidgets 7.6.3
jedi 0.18.0
Jinja2 3.0.1
jiwer 2.5.1
joblib 1.2.0
jsonschema 3.2.0
jupyter 1.0.0
jupyter-client 6.1.12
jupyter-console 6.4.0
jupyter-core 4.7.1
jupyterlab-pygments 0.1.2
jupyterlab-widgets 1.0.0
kiwisolver 1.4.4
Levenshtein 0.20.2
librosa 0.9.2
linkify-it-py 1.0.3
llvmlite 0.39.1
markdown-it-py 2.1.0
MarkupSafe 2.0.1
matplotlib 3.6.3
matplotlib-inline 0.1.2
mdit-py-plugins 0.3.3
mdurl 0.1.2
mistune 0.8.4
multidict 6.0.4
multiprocess 0.70.14
nbclient 0.5.4
nbconvert 6.1.0
nbformat 5.1.3
nest-asyncio 1.5.1
notebook 6.4.3
numba 0.56.4
numpy 1.20.3
orjson 3.8.5
packaging 21.0
pandas 1.5.3
pandocfilters 1.4.3
parso 0.8.2
pexpect 4.8.0
pickleshare 0.7.5
Pillow 9.4.0
pip 22.3.1
pipx 1.1.0
platformdirs 2.5.2
pooch 1.6.0
prometheus-client 0.11.0
prompt-toolkit 3.0.19
psutil 5.9.0
ptyprocess 0.7.0
pyarrow 10.0.1
pycparser 2.20
pycryptodome 3.16.0
pydantic 1.10.4
pydub 0.25.1
Pygments 2.10.0
pyparsing 2.4.7
pyrsistent 0.18.0
python-dateutil 2.8.2
python-multipart 0.0.5
pytz 2022.7.1
PyYAML 6.0
pyzmq 22.2.1
qtconsole 5.1.1
QtPy 1.10.0
rapidfuzz 2.13.7
regex 2022.10.31
requests 2.27.1
resampy 0.4.2
responses 0.18.0
rfc3986 1.5.0
scikit-learn 1.2.1
scipy 1.6.3
Send2Trash 1.8.0
setuptools 65.5.1
shiboken6 6.3.1
shiboken6-generator 6.3.1
six 1.16.0
sniffio 1.3.0
soundfile 0.11.0
starlette 0.22.0
terminado 0.11.0
testpath 0.5.0
threadpoolctl 3.1.0
tokenizers 0.13.2
toolz 0.12.0
torch 1.11.0a0+gitunknown
tornado 6.1
tqdm 4.64.1
traitlets 5.0.5
transformers 4.27.0.dev0
types-dataclasses 0.6.4
typing_extensions 4.1.1
uc-micro-py 1.0.1
urllib3 1.26.9
userpath 1.8.0
uvicorn 0.20.0
virtualenv 20.14.1
wcwidth 0.2.5
webencodings 0.5.1
websockets 10.4
wheel 0.37.1
widgetsnbextension 3.5.1
xxhash 3.2.0
yarl 1.8.2
### Steps to reproduce the bug
Create a pip environment with the packages listed above (make sure ffmpeg and libsndfile are installed with the same versions listed above).
Create a custom audio dataset and load it in with load_dataset("audiofolder", ...)
### Expected behavior
load_dataset should create a dataset with 400+ rows.
### Environment info
- `datasets` version: 2.9.0
- Platform: Linux-3.10.0-1160.80.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.9.0
- PyArrow version: 10.0.1
- Pandas version: 1.5.3 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5479/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5479/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/96 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/96/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/96/comments | https://api.github.com/repos/huggingface/datasets/issues/96/events | https://github.com/huggingface/datasets/pull/96 | 617,739,521 | MDExOlB1bGxSZXF1ZXN0NDE3NjAwMjY4 | 96 | lm1b | {
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jplu",
"id": 959590,
"login": "jplu",
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"repos_url": "https://api.github.com/users/jplu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jplu"
} | [] | closed | false | null | [] | null | [
"I might have a different version of `isort` than others. It seems like I'm always reordering the imports of others. But isn't really a problem..."
] | "2020-05-13T20:38:44Z" | "2020-05-14T14:13:30Z" | "2020-05-14T14:13:29Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/96.diff",
"html_url": "https://github.com/huggingface/datasets/pull/96",
"merged_at": "2020-05-14T14:13:29Z",
"patch_url": "https://github.com/huggingface/datasets/pull/96.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/96"
} | Add lm1b dataset. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/96/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/96/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5752 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5752/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5752/comments | https://api.github.com/repos/huggingface/datasets/issues/5752/events | https://github.com/huggingface/datasets/issues/5752 | 1,668,574,209 | I_kwDODunzps5jdGwB | 5,752 | Streaming dataset loses `.features` after `.add_column` | {
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sanchit-gandhi",
"id": 93869735,
"login": "sanchit-gandhi",
"node_id": "U_kgDOBZhWpw",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sanchit-gandhi"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | [] | null | [
"I believe the issue resides in this line:\r\nhttps://github.com/huggingface/datasets/blob/7c3a9b057c476c40d157bd7a5d57f49066239df0/src/datasets/iterable_dataset.py#L1415\r\n\r\nIf we pass the **new** features of the dataset to the `.map` method we can return the features after adding a column, e.g.:\r\n```python\r\nfrom datasets import load_dataset, Value\r\n\r\noriginal_dataset = load_dataset(\"librispeech_asr\", \"clean\", split=\"validation\", streaming=True)\r\nprint(original_dataset.features.keys())\r\n\r\n# now add a new column to our streaming dataset using our hack\r\nname = \"new_column\"\r\ncolumn = [\"some random text\" for _ in range(50)]\r\n\r\nnew_features = original_dataset.features.copy()\r\nnew_features[name] = Value(dtype=\"string\", id=None) # I know the correct column type for this feature\r\n\r\ndef add_column_fn(example, idx):\r\n if name in example:\r\n raise ValueError(f\"Error when adding {name}: column {name} is already in the dataset.\")\r\n return {name: column[idx]}\r\n\r\nmodified_dataset = original_dataset.map(add_column_fn, with_indices=True, features=new_features)\r\n\r\nprint(modified_dataset.features.keys())\r\n```\r\n**Print Output:**\r\n```\r\ndict_keys(['file', 'audio', 'text', 'speaker_id', 'chapter_id', 'id'])\r\ndict_keys(['file', 'audio', 'text', 'speaker_id', 'chapter_id', 'id', 'new_column'])\r\n```\r\n"
] | "2023-04-14T16:39:50Z" | "2023-04-14T17:46:54Z" | null | CONTRIBUTOR | null | null | null | ### Describe the bug
After appending a new column to a streaming dataset using `.add_column`, we can no longer access the dataset's features via the `.features` attribute.
### Steps to reproduce the bug
```python
from datasets import load_dataset
original_dataset = load_dataset("librispeech_asr", "clean", split="validation", streaming=True)
print(original_dataset.features.keys())
# now add a new column to our streaming dataset
modified_dataset = original_dataset.add_column("new_column", ["some random text" for _ in range(50)])
print(modified_dataset.features.keys())
```
**Print Output:**
```
dict_keys(['file', 'audio', 'text', 'speaker_id', 'chapter_id', 'id'])
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[1], line 8
6 # now add a new column to our streaming dataset
7 modified_dataset = original_dataset.add_column("new_column", ["some random text" for _ in range(50)])
----> 8 print(modified_dataset.features.keys())
AttributeError: 'NoneType' object has no attribute 'keys'
```
We see that we get the features for the original dataset, but not for the modified one with the added column.
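A quick check (not part of the original report) confirming that the column data itself survives — only the features metadata is dropped:
```python
# The new column is present in the yielded examples; only `features` became None.
first_example = next(iter(modified_dataset))
print(first_example["new_column"])   # "some random text"
print(modified_dataset.features)     # None
```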
### Expected behavior
Features should be preserved after adding a new column, i.e. calling:
```python
print(modified_dataset.features.keys())
```
Should return:
```
dict_keys(['file', 'audio', 'text', 'speaker_id', 'chapter_id', 'id', 'new_column'])
```
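Until that lands, here is a sketch of a helper mirroring the workaround from the comment above — it routes through `.map` and passes the updated features explicitly. The helper name and the default `Value("string")` dtype are illustrative, not part of the `datasets` API:
```python
from datasets import Value

def add_column_keeping_features(ds, name, column, feature=Value("string")):
    """Like `.add_column`, but keeps `.features` populated on a streaming dataset."""
    new_features = ds.features.copy()
    new_features[name] = feature

    def add_column_fn(example, idx):
        if name in example:
            raise ValueError(f"Column {name} is already in the dataset.")
        return {name: column[idx]}

    # Passing `features` explicitly is what preserves the schema.
    return ds.map(add_column_fn, with_indices=True, features=new_features)
```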
### Environment info
- `datasets` version: 2.10.2.dev0
- Platform: Linux-4.19.0-23-cloud-amd64-x86_64-with-glibc2.28
- Python version: 3.9.16
- Huggingface_hub version: 0.13.3
- PyArrow version: 10.0.1
- Pandas version: 1.5.2 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5752/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5752/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5891 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5891/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5891/comments | https://api.github.com/repos/huggingface/datasets/issues/5891/events | https://github.com/huggingface/datasets/pull/5891 | 1,722,384,135 | PR_kwDODunzps5RKchn | 5,891 | Make split slicing consistent with list slicing | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5891). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006916 / 0.011353 (-0.004437) | 0.004749 / 0.011008 (-0.006259) | 0.096086 / 0.038508 (0.057578) | 0.035448 / 0.023109 (0.012338) | 0.299645 / 0.275898 (0.023747) | 0.331279 / 0.323480 (0.007799) | 0.006018 / 0.007986 (-0.001968) | 0.004210 / 0.004328 (-0.000118) | 0.072998 / 0.004250 (0.068747) | 0.050082 / 0.037052 (0.013030) | 0.297714 / 0.258489 (0.039225) | 0.365523 / 0.293841 (0.071682) | 0.028081 / 0.128546 (-0.100465) | 0.009072 / 0.075646 (-0.066574) | 0.327628 / 0.419271 (-0.091643) | 0.051165 / 0.043533 (0.007633) | 0.295091 / 0.255139 (0.039952) | 0.320052 / 0.283200 (0.036852) | 0.109841 / 0.141683 (-0.031842) | 1.467867 / 1.452155 (0.015712) | 1.572600 / 1.492716 (0.079884) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.281490 / 0.018006 (0.263484) | 0.499259 / 0.000490 (0.498770) | 0.000691 / 0.000200 (0.000491) | 0.000062 / 0.000054 (0.000008) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027548 / 0.037411 (-0.009863) | 0.106592 / 0.014526 (0.092066) | 0.118654 / 0.176557 (-0.057902) | 0.174313 / 0.737135 (-0.562822) | 0.124491 / 0.296338 (-0.171848) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.399674 / 0.215209 (0.184465) | 3.984092 / 2.077655 (1.906437) | 1.790935 / 1.504120 (0.286815) | 1.593612 / 1.541195 (0.052417) | 1.694595 / 1.468490 
(0.226105) | 0.517588 / 4.584777 (-4.067189) | 3.724353 / 3.745712 (-0.021359) | 3.244807 / 5.269862 (-2.025054) | 1.602929 / 4.565676 (-2.962748) | 0.065334 / 0.424275 (-0.358941) | 0.012259 / 0.007607 (0.004652) | 0.501355 / 0.226044 (0.275311) | 4.996546 / 2.268929 (2.727618) | 2.279333 / 55.444624 (-53.165291) | 1.940126 / 6.876477 (-4.936351) | 2.122945 / 2.142072 (-0.019128) | 0.626104 / 4.805227 (-4.179123) | 0.141278 / 6.500664 (-6.359386) | 0.064522 / 0.075469 (-0.010947) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.195351 / 1.841788 (-0.646436) | 15.258932 / 8.074308 (7.184624) | 14.627623 / 10.191392 (4.436231) | 0.266897 / 0.680424 (-0.413527) | 0.017557 / 0.534201 (-0.516644) | 0.392932 / 0.579283 (-0.186351) | 0.416409 / 0.434364 (-0.017955) | 0.469100 / 0.540337 (-0.071237) | 0.556247 / 1.386936 (-0.830689) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006880 / 0.011353 (-0.004473) | 0.004837 / 0.011008 (-0.006171) | 0.074518 / 0.038508 (0.036010) | 0.034204 / 0.023109 (0.011095) | 0.365100 / 0.275898 (0.089202) | 0.394976 / 0.323480 (0.071496) | 0.006364 / 0.007986 (-0.001621) | 0.004269 / 0.004328 (-0.000060) | 0.073531 / 0.004250 (0.069281) | 0.051334 / 0.037052 (0.014281) | 0.373904 / 0.258489 (0.115415) | 0.413662 / 0.293841 (0.119821) | 0.028779 / 0.128546 (-0.099767) | 0.009292 / 0.075646 (-0.066354) | 0.081574 / 0.419271 (-0.337698) | 0.046531 / 0.043533 (0.002998) | 0.368995 / 0.255139 (0.113856) | 0.376938 / 0.283200 (0.093739) | 0.112576 / 0.141683 (-0.029107) | 1.458880 / 1.452155 (0.006725) | 1.550918 / 1.492716 (0.058202) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.319521 / 0.018006 (0.301515) | 0.510146 / 0.000490 (0.509656) | 0.000438 / 0.000200 (0.000238) | 0.000059 / 0.000054 (0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033082 / 0.037411 (-0.004329) | 0.118009 / 0.014526 (0.103483) | 0.127108 / 0.176557 (-0.049448) | 0.176600 / 0.737135 (-0.560535) | 0.133790 / 0.296338 (-0.162549) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.437360 / 0.215209 (0.222151) | 4.367426 / 2.077655 (2.289771) | 2.193646 / 1.504120 (0.689526) | 2.025002 / 1.541195 (0.483808) | 2.142347 / 1.468490 (0.673856) | 0.525497 / 4.584777 (-4.059280) | 3.751275 / 3.745712 (0.005563) | 1.912271 / 5.269862 (-3.357590) | 1.087286 / 4.565676 (-3.478390) | 0.066328 / 0.424275 (-0.357947) | 0.011904 / 0.007607 (0.004297) | 0.545870 / 0.226044 (0.319825) | 5.434481 / 2.268929 (3.165552) | 2.719745 / 55.444624 (-52.724880) | 2.445001 / 6.876477 (-4.431476) | 2.500205 / 2.142072 (0.358133) | 0.645735 / 4.805227 (-4.159492) | 0.144210 / 6.500664 (-6.356455) | 0.065688 / 0.075469 (-0.009781) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.273522 / 1.841788 (-0.568265) | 15.771778 / 8.074308 (7.697470) | 14.685261 / 10.191392 (4.493869) | 0.176523 / 0.680424 (-0.503900) | 0.017877 / 0.534201 (-0.516324) | 0.392687 / 0.579283 (-0.186596) | 0.449992 / 0.434364 (0.015628) | 0.462851 / 0.540337 (-0.077487) | 0.560178 / 1.386936 (-0.826758) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#0fa3ef6eba906ee1214e0596d15a78fc358909f4 \"CML watermark\")\n"
] | "2023-05-23T16:04:33Z" | "2023-05-23T16:11:12Z" | null | CONTRIBUTOR | null | 1 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5891.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5891",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5891.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5891"
} | Fix #1774, fix #5875
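For context, a hedged illustration of the split-slicing syntax this PR targets (the dataset name is arbitrary); the goal is for `split="train[a:b]"` to clamp out-of-range bounds the way Python list slicing does:
```python
from datasets import load_dataset

# Python list slicing clamps out-of-range indices: [1, 2, 3][:100] == [1, 2, 3].
# After this change, slicing a split past its length should behave the same way
# instead of raising an error.
ds = load_dataset("imdb", split="train[:100]")
print(len(ds))  # 100, or fewer if the split is shorter
```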
TODO: add a test | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5891/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5891/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2090 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2090/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2090/comments | https://api.github.com/repos/huggingface/datasets/issues/2090/events | https://github.com/huggingface/datasets/pull/2090 | 836,807,498 | MDExOlB1bGxSZXF1ZXN0NTk3MjgwNTEy | 2,090 | Add machine translated multilingual STS benchmark dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/229382?v=4",
"events_url": "https://api.github.com/users/PhilipMay/events{/privacy}",
"followers_url": "https://api.github.com/users/PhilipMay/followers",
"following_url": "https://api.github.com/users/PhilipMay/following{/other_user}",
"gists_url": "https://api.github.com/users/PhilipMay/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/PhilipMay",
"id": 229382,
"login": "PhilipMay",
"node_id": "MDQ6VXNlcjIyOTM4Mg==",
"organizations_url": "https://api.github.com/users/PhilipMay/orgs",
"received_events_url": "https://api.github.com/users/PhilipMay/received_events",
"repos_url": "https://api.github.com/users/PhilipMay/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/PhilipMay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PhilipMay/subscriptions",
"type": "User",
"url": "https://api.github.com/users/PhilipMay"
} | [] | closed | false | null | [] | null | [
"Hello dear maintainer, are there any comments or questions about this PR?",
"@iamollas thanks for the feedback. I did not see the template.\r\nI improved it...",
"Should be clean for merge IMO.",
"@lhoestq CI is green. ;-)",
"Thanks again ! this is awesome :)",
"Thanks for merging. :-)"
] | "2021-03-20T13:28:07Z" | "2021-03-29T13:24:42Z" | "2021-03-29T13:00:15Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2090.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2090",
"merged_at": "2021-03-29T13:00:15Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2090.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2090"
} | See also https://github.com/PhilipMay/stsb-multi-mt | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2090/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2090/timeline | null | null | true |