| column | dtype | lengths / values |
|---|---|---|
| url | stringlengths | 58–61 |
| repository_url | stringclasses | 1 value |
| labels_url | stringlengths | 72–75 |
| comments_url | stringlengths | 67–70 |
| events_url | stringlengths | 65–68 |
| html_url | stringlengths | 46–51 |
| id | int64 | 600M–2.05B |
| node_id | stringlengths | 18–32 |
| number | int64 | 2–6.51k |
| title | stringlengths | 1–290 |
| user | dict | |
| labels | listlengths | 0–4 |
| state | stringclasses | 2 values |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | listlengths | 0–4 |
| milestone | dict | |
| comments | sequencelengths | 0–30 |
| created_at | unknown | |
| updated_at | unknown | |
| closed_at | unknown | |
| author_association | stringclasses | 3 values |
| active_lock_reason | float64 | |
| draft | float64 | 0–1 |
| pull_request | dict | |
| body | stringlengths | 0–228k |
| reactions | dict | |
| timeline_url | stringlengths | 67–70 |
| performed_via_github_app | float64 | |
| state_reason | stringclasses | 3 values |
| is_pull_request | bool | 2 classes |
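A minimal sketch of loading a dataset with the schema above through the `datasets` library and inspecting a few of its columns. The repository id `user/github-issues` is a hypothetical placeholder rather than a name taken from this dump.

```python
from datasets import load_dataset

# "user/github-issues" is a hypothetical Hub repository id standing in for
# whichever dataset this dump was exported from.
issues = load_dataset("user/github-issues", split="train")

# Column types mirror the schema table: e.g. `state` is a small string-class
# column and `is_pull_request` is a boolean flag.
print(issues.features["state"])
print(issues.features["is_pull_request"])
print(issues[0]["number"], issues[0]["title"])

# Keep only the rows that correspond to pull requests.
pulls = issues.filter(lambda row: row["is_pull_request"])
```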
https://api.github.com/repos/huggingface/datasets/issues/18
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/18/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/18/comments
https://api.github.com/repos/huggingface/datasets/issues/18/events
https://github.com/huggingface/datasets/pull/18
606,109,196
MDExOlB1bGxSZXF1ZXN0NDA4Mzg0MTc3
18
Updating caching mechanism - Allow dependency in dataset processing scripts - Fix style and quality in the repo
{ "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/thomwolf", "id": 7353373, "login": "thomwolf", "node_id": "MDQ6VXNlcjczNTMzNzM=", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "repos_url": "https://api.github.com/users/thomwolf/repos", "site_admin": false, "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "type": "User", "url": "https://api.github.com/users/thomwolf" }
[]
closed
false
null
[]
null
[ "LGTM" ]
"2020-04-24T07:39:48Z"
"2020-04-29T15:27:28Z"
"2020-04-28T16:06:28Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/18.diff", "html_url": "https://github.com/huggingface/datasets/pull/18", "merged_at": "2020-04-28T16:06:28Z", "patch_url": "https://github.com/huggingface/datasets/pull/18.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/18" }
This PR has a lot of content (might be hard to review, sorry, in particular because I fixed the style in the repo at the same time).

# Style & quality:

You can now install the style and quality tools with `pip install -e .[quality]`. This will install black, the compatible version of isort, and flake8. You can then clean the style and check the quality before merging your PR with:
```bash
make style
make quality
```

# Allow dependencies in dataset processing scripts

We can now allow (some level of) imports in dataset processing scripts (in addition to PyPI imports). Namely, you can do the two following things:

Import from a relative path to a file in the same folder as the dataset processing script:
```python
import .c4_utils
```

Or import from a relative path to a file in a folder/archive/github repo for which you provide a URL after the import statement with `# From: [URL]`:
```python
import .clicr.dataset_code.build_json_dataset  # From: https://github.com/clips/clicr
```

In both these cases, after downloading the main dataset processing script, we identify the location of these dependencies, download them, and copy them into the dataset processing script folder.

Note that only direct imports in the dataset processing script are handled. We don't recursively explore the additional imports to download further files. Also, when we download from an additional directory (in the second case above), we recursively add `__init__.py` to all the sub-folders so you can import from them.

This part is still untested for now. If you've seen datasets which require external utilities, tell me and I can test it.

# Update the cache to have a better local structure

The local structure in the `src/datasets` folder is now: `src/datasets/DATASET_NAME/DATASET_HASH/*`

The hash is computed from the full code of the dataset processing script as well as all the local and downloaded dependencies mentioned above. This way, if you change some code in a utility related to your dataset, a new hash should be computed.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/18/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/18/timeline
null
null
true
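The PR body above describes computing the cache hash from the processing script plus all of its local and downloaded dependencies. Below is a minimal sketch of such a content hash; it is not the library's actual implementation, and the file names (`c4.py`, `c4_utils.py`) are only illustrative.

```python
import hashlib
from pathlib import Path

def script_hash(script_path, dependency_paths):
    # Hash the processing script together with every local/downloaded
    # dependency, so that editing a utility file yields a new cache folder.
    sha = hashlib.sha256()
    for path in [script_path, *sorted(dependency_paths)]:
        sha.update(Path(path).read_bytes())
    return sha.hexdigest()

# Cache layout described in the PR: src/datasets/DATASET_NAME/DATASET_HASH/*
# "c4.py" and "c4_utils.py" are illustrative file names.
cache_dir = Path("src/datasets") / "c4" / script_hash("c4.py", ["c4_utils.py"])
print(cache_dir)
```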
https://api.github.com/repos/huggingface/datasets/issues/4012
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4012/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4012/comments
https://api.github.com/repos/huggingface/datasets/issues/4012/events
https://github.com/huggingface/datasets/pull/4012
1,180,350,083
PR_kwDODunzps40_qgo
4,012
Rename wer to cer
{ "avatar_url": "https://avatars.githubusercontent.com/u/28428143?v=4", "events_url": "https://api.github.com/users/pmgautam/events{/privacy}", "followers_url": "https://api.github.com/users/pmgautam/followers", "following_url": "https://api.github.com/users/pmgautam/following{/other_user}", "gists_url": "https://api.github.com/users/pmgautam/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/pmgautam", "id": 28428143, "login": "pmgautam", "node_id": "MDQ6VXNlcjI4NDI4MTQz", "organizations_url": "https://api.github.com/users/pmgautam/orgs", "received_events_url": "https://api.github.com/users/pmgautam/received_events", "repos_url": "https://api.github.com/users/pmgautam/repos", "site_admin": false, "starred_url": "https://api.github.com/users/pmgautam/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pmgautam/subscriptions", "type": "User", "url": "https://api.github.com/users/pmgautam" }
[]
closed
false
null
[]
null
[]
"2022-03-25T05:06:05Z"
"2022-03-28T13:57:25Z"
"2022-03-28T13:57:25Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4012.diff", "html_url": "https://github.com/huggingface/datasets/pull/4012", "merged_at": "2022-03-28T13:57:25Z", "patch_url": "https://github.com/huggingface/datasets/pull/4012.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4012" }
The `wer` variable was changed to `cer` in the README file.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4012/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4012/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2366
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2366/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2366/comments
https://api.github.com/repos/huggingface/datasets/issues/2366/events
https://github.com/huggingface/datasets/issues/2366
893,185,266
MDU6SXNzdWU4OTMxODUyNjY=
2,366
Json loader fails if user-specified features don't match the json data fields order
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[]
"2021-05-17T10:26:08Z"
"2021-06-16T10:47:49Z"
"2021-06-16T10:47:49Z"
MEMBER
null
null
null
If you do
```python
dataset = load_dataset("json", data_files=data_files, features=features)
```
then, depending on the order of the fields in the json data, it fails:
```python
[...]
~/Desktop/hf/datasets/src/datasets/packaged_modules/json/json.py in _generate_tables(self, files)
     94                 if self.config.schema:
     95                     # Cast allows str <-> int/float, while parse_option explicit_schema does NOT
---> 96                     pa_table = pa_table.cast(self.config.schema)
     97                 yield i, pa_table
[...]
ValueError: Target schema's field names are not matching the table's field names: ['tokens', 'ner_tags'], ['ner_tags', 'tokens']
```
This is because one must first re-order the columns of the table to match `self.config.schema` before calling cast. One way to fix the `cast` would be to replace it with:
```python
# reorder the arrays if necessary + cast to schema
# we can't simply use .cast here because we may need to change the order of the columns
pa_table = pa.Table.from_arrays([pa_table[name] for name in schema.names], schema=schema)
```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2366/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2366/timeline
null
completed
false
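The issue above is fixed by reordering the table columns to the target schema before casting. Here is a small self-contained reproduction of the failure and of the quoted workaround, using toy data; whether `Table.cast` raises in this situation may depend on the pyarrow version.

```python
import pyarrow as pa

# Table whose column order differs from the user-specified features.
table = pa.table({
    "ner_tags": [[0, 1], [2]],
    "tokens": [["Hello", "world"], ["!"]],
})
schema = pa.schema([
    ("tokens", pa.list_(pa.string())),
    ("ner_tags", pa.list_(pa.int64())),
])

try:
    table.cast(schema)  # compares field names positionally -> may fail
except (ValueError, pa.ArrowInvalid) as err:
    print("cast failed:", err)

# Reorder the columns to the target schema first, then attach the schema.
fixed = pa.Table.from_arrays([table[name] for name in schema.names], schema=schema)
print(fixed.schema)
```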
https://api.github.com/repos/huggingface/datasets/issues/3248
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3248/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3248/comments
https://api.github.com/repos/huggingface/datasets/issues/3248/events
https://github.com/huggingface/datasets/pull/3248
1,050,171,082
PR_kwDODunzps4uXZzU
3,248
Stream from Google Drive and other hosts
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "I just tried some datasets and noticed that `spider` is not working for some reason (the compression type is not recognized), resulting in FileNotFoundError. I can take a look tomorrow", "I'm fixing the remaining files based on TAR archives", "THANKS A LOT" ]
"2021-11-10T18:32:32Z"
"2021-11-30T16:03:43Z"
"2021-11-12T17:18:11Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3248.diff", "html_url": "https://github.com/huggingface/datasets/pull/3248", "merged_at": "2021-11-12T17:18:10Z", "patch_url": "https://github.com/huggingface/datasets/pull/3248.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3248" }
Streaming from Google Drive is a bit more challenging than the other hosts we've been supporting:
- the download URL must be updated to add the confirm token obtained by a HEAD request
- it requires using cookies to keep the connection alive
- the URL doesn't give any information about whether the file is compressed or not

Therefore I did two things:
- I added a step for URL and headers/cookies preparation in the StreamingDownloadManager
- I added automatic compression type inference by reading the [magic number](https://en.wikipedia.org/wiki/List_of_file_signatures)

This allows you to do fancy things like
```python
from datasets.utils.streaming_download_manager import StreamingDownloadManager, xopen, xjoin, xglob

# zip file containing a train.tsv file
url = "https://drive.google.com/uc?export=download&id=1k92sUfpHxKq8PXWRr7Y5aNHXwOCNUmqh"

extracted = StreamingDownloadManager().download_and_extract(url)
for inner_file in xglob(xjoin(extracted, "*.tsv")):
    with xopen(inner_file) as f:  # streaming starts here
        for line in f:
            print(line)
```

This should make around 80 datasets streamable. It concerns those hosted on Google Drive but also any dataset for which the URL doesn't give any information about compression. Here is the full list:
```
amazon_polarity, ami, arabic_billion_words, ascent_kb, asset, big_patent, billsum, capes, cmrc2018, cnn_dailymail, code_x_glue_cc_code_completion_token, code_x_glue_cc_code_refinement, code_x_glue_cc_code_to_code_trans, code_x_glue_tt_text_to_text, conll2002, craigslist_bargains, dbpedia_14, docred, ehealth_kd, emo, euronews, germeval_14, gigaword, grail_qa, great_code, has_part, head_qa, health_fact, hope_edi, id_newspapers_2018, igbo_english_machine_translation, irc_disentangle, jfleg, jnlpba, journalists_questions, kor_ner, linnaeus, med_hop, mrqa, mt_eng_vietnamese, multi_news, norwegian_ner, offcombr, offenseval_dravidian, para_pat, peoples_daily_ner, pn_summary, poleval2019_mt, pubmed_qa, qangaroo, reddit_tifu, refresd, ro_sts_parallel, russian_super_glue, samsum, sberquad, scielo, search_qa, species_800, spider, squad_adversarial, tamilmixsentiment, tashkeela, ted_talks_iwslt, trec, turk, turkish_ner, twi_text_c3, universal_morphologies, web_of_science, weibo_ner, wiki_bio, wiki_hop, wiki_lingua, wiki_summary, wili_2018, wisesight1000, wnut_17, yahoo_answers_topics, yelp_review_full, yoruba_text_c3
```

Some of them may not work if the host doesn't support HTTP range requests, for example.

Fix https://github.com/huggingface/datasets/issues/2742
Fix https://github.com/huggingface/datasets/issues/3188
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 2, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/3248/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3248/timeline
null
null
true
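The PR above infers the compression type by reading the file's magic number when the URL gives no hint. A minimal sketch of that idea follows; the signature list is a small illustrative subset and this is not the library's actual implementation.

```python
# Leading "magic" bytes for a few common compression/archive formats.
MAGIC_NUMBERS = {
    b"\x1f\x8b": "gzip",
    b"PK\x03\x04": "zip",
    b"BZh": "bz2",
    b"\xfd7zXZ\x00": "xz",
    b"\x28\xb5\x2f\xfd": "zstd",
}

def infer_compression(first_bytes):
    # Return the protocol whose signature prefixes the stream, if any.
    for magic, protocol in MAGIC_NUMBERS.items():
        if first_bytes.startswith(magic):
            return protocol
    return None

# Usage: peek at the first bytes of the remote stream, then reopen it with
# the matching decompression (e.g. fsspec's open(url, compression=protocol)).
print(infer_compression(b"\x1f\x8b\x08\x00"))  # -> "gzip"
```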
https://api.github.com/repos/huggingface/datasets/issues/345
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/345/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/345/comments
https://api.github.com/repos/huggingface/datasets/issues/345/events
https://github.com/huggingface/datasets/issues/345
651,761,201
MDU6SXNzdWU2NTE3NjEyMDE=
345
Supporting documents in ELI5
{ "avatar_url": "https://avatars.githubusercontent.com/u/29262273?v=4", "events_url": "https://api.github.com/users/saverymax/events{/privacy}", "followers_url": "https://api.github.com/users/saverymax/followers", "following_url": "https://api.github.com/users/saverymax/following{/other_user}", "gists_url": "https://api.github.com/users/saverymax/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/saverymax", "id": 29262273, "login": "saverymax", "node_id": "MDQ6VXNlcjI5MjYyMjcz", "organizations_url": "https://api.github.com/users/saverymax/orgs", "received_events_url": "https://api.github.com/users/saverymax/received_events", "repos_url": "https://api.github.com/users/saverymax/repos", "site_admin": false, "starred_url": "https://api.github.com/users/saverymax/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/saverymax/subscriptions", "type": "User", "url": "https://api.github.com/users/saverymax" }
[]
closed
false
null
[]
null
[ "Hi @saverymax ! For licensing reasons, the original team was unable to release pre-processed CommonCrawl documents. Instead, they provided a script to re-create them from a CommonCrawl dump, but it unfortunately requires access to a medium-large size cluster:\r\nhttps://github.com/facebookresearch/ELI5#downloading-support-documents-from-the-commoncrawl\r\n\r\nIn order to make the task accessible to people who may not have access to this kind of infrastructure, we suggest to use Wikipedia as a knowledge source rather than the full CommonCrawl. The following blog post shows how you can create Wikipedia support documents and get a performance that is on par with a system that uses CommonCrawl pages.\r\nhttps://yjernite.github.io/lfqa.html#task_description\r\n\r\nHope that helps, using ElasticSearch to index Wiki40b and create the documents should take about 4 hours. Let us know if you have any trouble with the blog post though!", "Hi, thanks for the quick response. The blog post is quite an interesting working example, thanks for sharing it.\r\nTwo follow-up points/questions about my original question:\r\n\r\n1. Yes, I read that the facebook team could not share the CommonCrawl b/c of licensing reasons. They state \"No, we are not allowed to host processed Reddit or CommonCrawl data,\" which indicates they could also not share the Reddit data for licensing reasons. But it seems that HuggingFace is able to share the Reddit data, so why not a subset of CommonCrawl?\r\n\r\n2. Thanks for the suggestion about ElasticSearch and Wiki40b. This is good to know about performance. I definitely could do the indexing and querying myself. What I like about the ELI5 dataset though, at least what is suggested by the paper, is that to create the dataset they had already selected the top 100 web sources and made a single support document from those. Though it doesn't appear to be too sophisticated an approach, having a single support document pre-computed (without having to run the facebook code or a replacement with another dataset) is super useful for my work, especially since I'm not working on developing the latest and greatest retrieval model. Of course, I don't expect HF NLP datasets to be perfectly tailored to my use-case. I know there is overhead to any project, I'm just illustrating a use-case of ELI5 which is not possible with the data provided as-is. If it's for licensing reasons, that is perfectly acceptable a reason, and I appreciate your response." ]
"2020-07-06T19:14:13Z"
"2020-10-27T15:38:45Z"
"2020-10-27T15:38:45Z"
NONE
null
null
null
I was attempting to use the ELI5 dataset, when I realized that huggingface does not provide the supporting documents (the source documents from the common crawl). Without the supporting documents, this makes the dataset about as useful for my project as a block of cheese, or some other more apt metaphor. According to facebook, the entire document collection is quite large. However, it would still be helpful to at least include a subset of the supporting documents i.e., having some data is better than having a block of cheese, in my case at least. If you choose not to include them, it would be helpful to have documentation mentioning this specifically. It is especially confusing because the hf nlp ELI5 dataset has the key `'document'` but there are no documents to be found :(
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/345/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/345/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4599
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4599/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4599/comments
https://api.github.com/repos/huggingface/datasets/issues/4599/events
https://github.com/huggingface/datasets/pull/4599
1,288,849,933
PR_kwDODunzps46kvfC
4,599
Smooth-BLEU bug fixed
{ "avatar_url": "https://avatars.githubusercontent.com/u/36672861?v=4", "events_url": "https://api.github.com/users/Aktsvigun/events{/privacy}", "followers_url": "https://api.github.com/users/Aktsvigun/followers", "following_url": "https://api.github.com/users/Aktsvigun/following{/other_user}", "gists_url": "https://api.github.com/users/Aktsvigun/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Aktsvigun", "id": 36672861, "login": "Aktsvigun", "node_id": "MDQ6VXNlcjM2NjcyODYx", "organizations_url": "https://api.github.com/users/Aktsvigun/orgs", "received_events_url": "https://api.github.com/users/Aktsvigun/received_events", "repos_url": "https://api.github.com/users/Aktsvigun/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Aktsvigun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Aktsvigun/subscriptions", "type": "User", "url": "https://api.github.com/users/Aktsvigun" }
[ { "color": "E3165C", "default": false, "description": "", "id": 4190228726, "name": "transfer-to-evaluate", "node_id": "LA_kwDODunzps75wdD2", "url": "https://api.github.com/repos/huggingface/datasets/labels/transfer-to-evaluate" } ]
closed
false
null
[]
null
[ "Thanks @Aktsvigun for your fix.\r\n\r\nHowever, metrics in `datasets` are in deprecation mode:\r\n- #4739\r\n\r\nYou should transfer this PR to the `evaluate` library: https://github.com/huggingface/evaluate\r\n\r\nJust for context, here the link to the PR by @Aktsvigun on tensorflow/nmt:\r\n- https://github.com/tensorflow/nmt/pull/488" ]
"2022-06-29T14:51:42Z"
"2022-09-23T07:42:40Z"
"2022-09-23T07:42:40Z"
NONE
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4599.diff", "html_url": "https://github.com/huggingface/datasets/pull/4599", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/4599.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4599" }
Hi, the current implementation of smooth-BLEU contains a bug: it smoothes unigrams as well. Consequently, when the reference and the translation consist of totally different tokens, it still returns a non-zero value (please see the attached image). This, however, contradicts the source paper that proposes smooth-BLEU _(Chin-Yew Lin, Franz Josef Och. ORANGE: a method for evaluating automatic evaluation metrics for machine translation. COLING 2004.)_:

> Add one count to the n-gram hit and total ngram count for n > 1. Therefore, for candidate translations with less than n words, they can still get a positive smoothed BLEU score from shorter n-gram matches; however if nothing matches then they will get zero scores.

This pull request aims at fixing this bug. I made a pull request in the target repository `tensorflow/nmt`, which implements this script, yet the last commit there dates from 19.02.2019 and I doubt this will be fixed promptly. This bug is critical, for instance, for summarization datasets with short summaries (e.g. AESLC), since smoothing needs to be applied there. Therefore, the easiest solution I found is to fork the repo and download this script directly from the forked, fixed repo.

Kind regards,
Akim Tsvigun

<img width="516" alt="Screenshot 2022-06-29 at 17 49 27" src="https://user-images.githubusercontent.com/36672861/176466935-ac579e6d-6a93-4111-ab41-9b33056e7d47.png">
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4599/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4599/timeline
null
null
true
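The report above turns on Lin & Och (2004)'s smoothing rule: one is added to the n-gram hit and total counts only for n > 1, so a candidate that shares no unigrams with the reference still scores zero. A small toy sketch of that rule follows; it is only illustrative and is not the metric script being patched.

```python
from collections import Counter
import math

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def smoothed_bleu(reference, hypothesis, max_n=4):
    # Lin & Och (2004): add one to the n-gram hit/total counts for n > 1 only.
    precisions = []
    for n in range(1, max_n + 1):
        ref, hyp = ngrams(reference, n), ngrams(hypothesis, n)
        hits = sum(min(count, ref[gram]) for gram, count in hyp.items())
        total = sum(hyp.values())
        if n > 1:
            hits, total = hits + 1, total + 1
        precisions.append(hits / total if total else 0.0)
    if min(precisions) == 0.0:
        # No unigram overlap (or an empty hypothesis) -> zero score.
        return 0.0
    brevity = min(1.0, math.exp(1 - len(reference) / len(hypothesis)))
    return brevity * math.exp(sum(math.log(p) for p in precisions) / max_n)

print(smoothed_bleu("the cat sat".split(), "a dog ran".split()))   # 0.0, as the paper requires
print(smoothed_bleu("the cat sat".split(), "the cat sat".split()))  # 1.0
```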
https://api.github.com/repos/huggingface/datasets/issues/4266
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4266/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4266/comments
https://api.github.com/repos/huggingface/datasets/issues/4266/events
https://github.com/huggingface/datasets/pull/4266
1,223,116,436
PR_kwDODunzps43LeXK
4,266
Add HF Speech Bench to Librispeech Dataset Card
{ "avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4", "events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}", "followers_url": "https://api.github.com/users/sanchit-gandhi/followers", "following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}", "gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sanchit-gandhi", "id": 93869735, "login": "sanchit-gandhi", "node_id": "U_kgDOBZhWpw", "organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs", "received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events", "repos_url": "https://api.github.com/users/sanchit-gandhi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions", "type": "User", "url": "https://api.github.com/users/sanchit-gandhi" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
"2022-05-02T16:59:31Z"
"2022-05-05T08:47:20Z"
"2022-05-05T08:40:09Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4266.diff", "html_url": "https://github.com/huggingface/datasets/pull/4266", "merged_at": "2022-05-05T08:40:09Z", "patch_url": "https://github.com/huggingface/datasets/pull/4266.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4266" }
Adds the HF Speech Bench to Librispeech Dataset Card in place of the Papers With Code Leaderboard. Should improve usage and visibility of this leaderboard! Wondering whether this can also be done for [Common Voice 7](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0) and [8](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0) through someone with permissions? cc @patrickvonplaten: more leaderboard promotion!
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4266/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4266/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6243
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6243/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6243/comments
https://api.github.com/repos/huggingface/datasets/issues/6243/events
https://github.com/huggingface/datasets/pull/6243
1,898,532,784
PR_kwDODunzps5aclIy
6,243
Fix cast from fixed size list to variable size list
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006784 / 0.011353 (-0.004569) | 0.004051 / 0.011008 (-0.006957) | 0.083790 / 0.038508 (0.045282) | 0.081219 / 0.023109 (0.058110) | 0.313195 / 0.275898 (0.037297) | 0.336954 / 0.323480 (0.013475) | 0.004324 / 0.007986 (-0.003662) | 0.004516 / 0.004328 (0.000188) | 0.065051 / 0.004250 (0.060801) | 0.057647 / 0.037052 (0.020595) | 0.316675 / 0.258489 (0.058186) | 0.357936 / 0.293841 (0.064095) | 0.030980 / 0.128546 (-0.097566) | 0.008844 / 0.075646 (-0.066802) | 0.287027 / 0.419271 (-0.132245) | 0.052130 / 0.043533 (0.008597) | 0.308125 / 0.255139 (0.052986) | 0.337345 / 0.283200 (0.054145) | 0.025781 / 0.141683 (-0.115902) | 1.466161 / 1.452155 (0.014006) | 1.565824 / 1.492716 (0.073108) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.299112 / 0.018006 (0.281106) | 0.640520 / 0.000490 (0.640030) | 0.008846 / 0.000200 (0.008647) | 0.000273 / 0.000054 (0.000219) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029853 / 0.037411 (-0.007559) | 0.081697 / 0.014526 (0.067172) | 0.099110 / 0.176557 (-0.077447) | 0.155864 / 0.737135 (-0.581271) | 0.098749 / 0.296338 (-0.197590) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.385722 / 0.215209 (0.170512) | 3.851490 / 2.077655 (1.773835) | 1.851995 / 1.504120 (0.347875) | 1.660398 / 1.541195 (0.119204) | 1.769370 / 1.468490 
(0.300879) | 0.481523 / 4.584777 (-4.103254) | 3.550449 / 3.745712 (-0.195263) | 3.424782 / 5.269862 (-1.845079) | 2.106470 / 4.565676 (-2.459206) | 0.056500 / 0.424275 (-0.367775) | 0.007891 / 0.007607 (0.000284) | 0.465564 / 0.226044 (0.239520) | 4.662892 / 2.268929 (2.393964) | 2.305424 / 55.444624 (-53.139201) | 1.980524 / 6.876477 (-4.895953) | 2.218423 / 2.142072 (0.076350) | 0.584662 / 4.805227 (-4.220565) | 0.132325 / 6.500664 (-6.368340) | 0.060773 / 0.075469 (-0.014696) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.254261 / 1.841788 (-0.587527) | 19.479805 / 8.074308 (11.405497) | 14.222687 / 10.191392 (4.031295) | 0.149829 / 0.680424 (-0.530595) | 0.018630 / 0.534201 (-0.515571) | 0.395284 / 0.579283 (-0.183999) | 0.413385 / 0.434364 (-0.020978) | 0.462931 / 0.540337 (-0.077406) | 0.645359 / 1.386936 (-0.741577) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006991 / 0.011353 (-0.004362) | 0.004306 / 0.011008 (-0.006702) | 0.065213 / 0.038508 (0.026705) | 0.082442 / 0.023109 (0.059332) | 0.411294 / 0.275898 (0.135396) | 0.452176 / 0.323480 (0.128696) | 0.005802 / 0.007986 (-0.002183) | 0.003556 / 0.004328 (-0.000772) | 0.066163 / 0.004250 (0.061913) | 0.060680 / 0.037052 (0.023628) | 0.416975 / 0.258489 (0.158486) | 0.456353 / 0.293841 (0.162512) | 0.033584 / 0.128546 (-0.094963) | 0.008687 / 0.075646 (-0.066959) | 0.071300 / 0.419271 (-0.347972) | 0.049382 / 0.043533 (0.005849) | 0.409329 / 0.255139 (0.154190) | 0.434829 / 0.283200 (0.151629) | 0.022966 / 0.141683 (-0.118716) | 1.493847 / 1.452155 (0.041692) | 1.582372 / 1.492716 (0.089656) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.280578 / 0.018006 (0.262572) | 0.538122 / 0.000490 (0.537632) | 0.004515 / 0.000200 (0.004315) | 0.000098 / 0.000054 (0.000043) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033383 / 0.037411 (-0.004028) | 0.093426 / 0.014526 (0.078901) | 0.109314 / 0.176557 (-0.067242) | 0.162349 / 0.737135 (-0.574786) | 0.109849 / 0.296338 (-0.186490) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.431073 / 0.215209 (0.215864) | 4.311942 / 2.077655 (2.234287) | 2.291170 / 1.504120 (0.787051) | 2.132266 / 1.541195 (0.591072) | 2.236526 / 1.468490 (0.768036) | 0.492001 / 4.584777 (-4.092776) | 3.523013 / 3.745712 (-0.222699) | 3.413481 / 5.269862 (-1.856381) | 2.112979 / 4.565676 (-2.452698) | 0.058654 / 0.424275 (-0.365621) | 0.007729 / 0.007607 (0.000121) | 0.512027 / 0.226044 (0.285982) | 5.125264 / 2.268929 (2.856336) | 2.836281 / 55.444624 (-52.608344) | 2.447253 / 6.876477 (-4.429224) | 2.711908 / 2.142072 (0.569835) | 0.592598 / 4.805227 (-4.212629) | 0.134837 / 6.500664 (-6.365827) | 0.059813 / 0.075469 (-0.015656) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.373464 / 1.841788 (-0.468323) | 20.548983 / 8.074308 (12.474675) | 14.799833 / 10.191392 (4.608441) | 0.168601 / 0.680424 (-0.511823) | 0.020358 / 0.534201 (-0.513843) | 0.398790 / 0.579283 (-0.180494) | 0.416921 / 0.434364 (-0.017443) | 0.480542 / 0.540337 (-0.059795) | 0.645062 / 1.386936 (-0.741874) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#afd6fc193a91cb0461c8bf3b64db6943c23de846 \"CML watermark\")\n", "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | 
write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008616 / 0.011353 (-0.002737) | 0.004957 / 0.011008 (-0.006051) | 0.102629 / 0.038508 (0.064121) | 0.080492 / 0.023109 (0.057383) | 0.461817 / 0.275898 (0.185919) | 0.487964 / 0.323480 (0.164484) | 0.006336 / 0.007986 (-0.001649) | 0.004607 / 0.004328 (0.000278) | 0.074311 / 0.004250 (0.070061) | 0.060368 / 0.037052 (0.023315) | 0.458076 / 0.258489 (0.199587) | 0.493028 / 0.293841 (0.199187) | 0.044153 / 0.128546 (-0.084394) | 0.014066 / 0.075646 (-0.061581) | 0.369848 / 0.419271 (-0.049424) | 0.061690 / 0.043533 (0.018157) | 0.439728 / 0.255139 (0.184590) | 0.484706 / 0.283200 (0.201506) | 0.034657 / 0.141683 (-0.107026) | 1.710591 / 1.452155 (0.258437) | 1.900225 / 1.492716 (0.407509) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.308837 / 0.018006 (0.290831) | 0.579561 / 0.000490 (0.579072) | 0.010163 / 0.000200 (0.009963) | 0.000613 / 0.000054 (0.000558) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028108 / 0.037411 (-0.009303) | 0.085072 / 0.014526 (0.070546) | 0.103375 / 0.176557 (-0.073182) | 0.173765 / 0.737135 (-0.563371) | 0.102460 / 0.296338 (-0.193879) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.602642 / 0.215209 (0.387433) | 5.582537 / 2.077655 (3.504882) | 2.405553 / 1.504120 (0.901434) | 2.057298 / 1.541195 (0.516103) | 2.223787 / 1.468490 (0.755297) | 0.846138 / 4.584777 (-3.738638) | 5.290306 / 3.745712 (1.544594) | 4.836066 / 5.269862 (-0.433795) | 2.951901 / 4.565676 (-1.613775) | 0.099432 / 0.424275 (-0.324843) | 0.009198 / 0.007607 (0.001591) | 0.731370 / 0.226044 (0.505325) | 6.663026 / 2.268929 (4.394098) | 3.200932 / 55.444624 (-52.243692) | 2.486654 / 6.876477 (-4.389823) | 2.833195 / 2.142072 (0.691123) | 0.989481 / 4.805227 (-3.815746) | 0.205176 / 6.500664 (-6.295488) | 0.073760 / 0.075469 (-0.001709) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.745494 / 1.841788 (-0.096294) | 24.649294 / 8.074308 (16.574986) | 22.312182 / 10.191392 (12.120790) | 0.245207 / 0.680424 (-0.435217) | 0.031971 / 0.534201 (-0.502230) | 0.495179 / 0.579283 (-0.084104) | 0.603233 / 0.434364 (0.168869) | 0.560906 / 0.540337 (0.020569) | 0.788292 / 
1.386936 (-0.598644) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008922 / 0.011353 (-0.002431) | 0.005203 / 0.011008 (-0.005805) | 0.074414 / 0.038508 (0.035906) | 0.077552 / 0.023109 (0.054443) | 0.547217 / 0.275898 (0.271319) | 0.625298 / 0.323480 (0.301818) | 0.006135 / 0.007986 (-0.001851) | 0.004163 / 0.004328 (-0.000165) | 0.078014 / 0.004250 (0.073764) | 0.064484 / 0.037052 (0.027431) | 0.562356 / 0.258489 (0.303867) | 0.643613 / 0.293841 (0.349772) | 0.050155 / 0.128546 (-0.078391) | 0.013665 / 0.075646 (-0.061981) | 0.090224 / 0.419271 (-0.329048) | 0.063852 / 0.043533 (0.020319) | 0.560914 / 0.255139 (0.305775) | 0.591531 / 0.283200 (0.308331) | 0.036491 / 0.141683 (-0.105192) | 1.670898 / 1.452155 (0.218743) | 1.783924 / 1.492716 (0.291208) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.312764 / 0.018006 (0.294758) | 0.611116 / 0.000490 (0.610626) | 0.006367 / 0.000200 (0.006167) | 0.000130 / 0.000054 (0.000075) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033967 / 0.037411 (-0.003445) | 0.101550 / 0.014526 (0.087025) | 0.116953 / 0.176557 (-0.059604) | 0.180061 / 0.737135 (-0.557075) | 0.115220 / 0.296338 (-0.181118) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.642110 / 0.215209 (0.426901) | 6.361381 / 2.077655 (4.283727) | 2.948175 / 1.504120 (1.444055) | 2.633935 / 1.541195 (1.092740) | 2.822150 / 1.468490 (1.353660) | 
0.931412 / 4.584777 (-3.653365) | 5.428540 / 3.745712 (1.682828) | 4.672920 / 5.269862 (-0.596941) | 3.102046 / 4.565676 (-1.463630) | 0.100825 / 0.424275 (-0.323450) | 0.009464 / 0.007607 (0.001857) | 0.774102 / 0.226044 (0.548058) | 7.715003 / 2.268929 (5.446074) | 3.987807 / 55.444624 (-51.456817) | 3.089129 / 6.876477 (-3.787347) | 3.333247 / 2.142072 (1.191174) | 1.012427 / 4.805227 (-3.792800) | 0.200662 / 6.500664 (-6.300002) | 0.072422 / 0.075469 (-0.003047) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.680364 / 1.841788 (-0.161424) | 24.484576 / 8.074308 (16.410268) | 21.920990 / 10.191392 (11.729598) | 0.218604 / 0.680424 (-0.461820) | 0.035818 / 0.534201 (-0.498383) | 0.470648 / 0.579283 (-0.108635) | 0.585108 / 0.434364 (0.150744) | 0.539152 / 0.540337 (-0.001185) | 0.763999 / 1.386936 (-0.622937) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#cfed1d09ed6c680085624d96eb99bfb2b0b27599 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006304 / 0.011353 (-0.005049) | 0.003884 / 0.011008 (-0.007125) | 0.084847 / 0.038508 (0.046339) | 0.069372 / 0.023109 (0.046263) | 0.318876 / 0.275898 (0.042978) | 0.344733 / 0.323480 (0.021253) | 0.005139 / 0.007986 (-0.002847) | 0.003203 / 0.004328 (-0.001125) | 0.065758 / 0.004250 (0.061507) | 0.054189 / 0.037052 (0.017137) | 0.317475 / 0.258489 (0.058986) | 0.359310 / 0.293841 (0.065469) | 0.030639 / 0.128546 (-0.097908) | 0.008657 / 0.075646 (-0.066989) | 0.289127 / 0.419271 (-0.130144) | 0.052344 / 0.043533 (0.008811) | 0.316122 / 0.255139 (0.060983) | 0.338339 / 0.283200 (0.055140) | 0.022677 / 0.141683 (-0.119006) | 1.551629 / 1.452155 (0.099474) | 1.617917 / 1.492716 (0.125201) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.231067 / 0.018006 (0.213061) | 0.450559 / 0.000490 (0.450070) | 0.008484 / 0.000200 (0.008284) | 0.000234 
/ 0.000054 (0.000179) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027054 / 0.037411 (-0.010357) | 0.081560 / 0.014526 (0.067034) | 0.094162 / 0.176557 (-0.082395) | 0.148583 / 0.737135 (-0.588552) | 0.093596 / 0.296338 (-0.202742) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.388616 / 0.215209 (0.173407) | 3.874905 / 2.077655 (1.797251) | 1.915845 / 1.504120 (0.411725) | 1.746410 / 1.541195 (0.205215) | 1.828789 / 1.468490 (0.360299) | 0.483270 / 4.584777 (-4.101506) | 3.489157 / 3.745712 (-0.256555) | 3.190086 / 5.269862 (-2.079776) | 1.978023 / 4.565676 (-2.587653) | 0.056290 / 0.424275 (-0.367985) | 0.007585 / 0.007607 (-0.000022) | 0.467051 / 0.226044 (0.241007) | 4.665971 / 2.268929 (2.397043) | 2.418550 / 55.444624 (-53.026075) | 2.048338 / 6.876477 (-4.828139) | 2.225275 / 2.142072 (0.083203) | 0.576601 / 4.805227 (-4.228626) | 0.131960 / 6.500664 (-6.368704) | 0.060177 / 0.075469 (-0.015292) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.249797 / 1.841788 (-0.591991) | 18.552939 / 8.074308 (10.478631) | 14.016616 / 10.191392 (3.825224) | 0.162869 / 0.680424 (-0.517555) | 0.018105 / 0.534201 (-0.516096) | 0.394838 / 0.579283 (-0.184445) | 0.403378 / 0.434364 (-0.030986) | 0.460931 / 0.540337 (-0.079407) | 0.637365 / 1.386936 (-0.749571) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006497 / 0.011353 (-0.004856) | 0.003928 / 0.011008 (-0.007080) | 0.063958 / 0.038508 (0.025450) | 0.069609 / 0.023109 (0.046500) | 0.401599 / 0.275898 (0.125701) | 0.428128 / 0.323480 (0.104648) | 0.005296 / 0.007986 (-0.002689) | 0.003332 / 0.004328 (-0.000996) | 0.063903 / 0.004250 (0.059652) | 0.056303 / 0.037052 (0.019250) | 0.400704 / 0.258489 (0.142214) | 0.435982 / 0.293841 (0.142141) | 0.032434 / 0.128546 (-0.096112) | 0.008570 / 0.075646 (-0.067077) | 0.070788 / 0.419271 (-0.348483) | 0.048252 / 0.043533 (0.004719) | 0.403269 / 0.255139 (0.148130) | 0.419796 / 0.283200 (0.136596) | 0.022598 / 0.141683 (-0.119085) | 1.481627 / 1.452155 (0.029472) | 1.578388 / 1.492716 (0.085672) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.224552 / 0.018006 (0.206546) | 0.444059 / 0.000490 (0.443570) | 0.003757 / 0.000200 (0.003557) | 0.000225 / 0.000054 (0.000171) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032173 / 0.037411 (-0.005239) | 0.092562 / 0.014526 (0.078036) | 0.104972 / 0.176557 (-0.071584) | 0.156467 / 0.737135 (-0.580669) | 0.104274 / 0.296338 (-0.192065) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.441693 / 0.215209 (0.226484) | 4.400217 / 2.077655 (2.322562) | 2.393862 / 1.504120 (0.889742) | 2.281178 / 1.541195 (0.739983) | 2.339895 / 1.468490 (0.871405) | 0.488734 / 4.584777 (-4.096043) | 3.523352 / 3.745712 (-0.222360) | 3.216761 / 5.269862 (-2.053101) | 2.007553 / 4.565676 (-2.558123) | 0.058050 / 0.424275 (-0.366225) | 0.007566 / 0.007607 (-0.000041) | 0.515439 / 0.226044 (0.289394) | 5.155086 / 2.268929 (2.886157) | 2.864958 / 55.444624 (-52.579666) | 2.592460 / 6.876477 (-4.284016) | 2.800449 / 2.142072 (0.658376) | 0.588441 / 4.805227 (-4.216786) | 0.131589 / 6.500664 (-6.369075) | 0.059075 / 0.075469 (-0.016394) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.353889 / 1.841788 (-0.487898) | 18.938285 / 8.074308 (10.863977) | 14.937141 / 10.191392 (4.745749) | 0.168811 / 0.680424 (-0.511613) | 0.020118 / 0.534201 (-0.514083) | 0.394791 / 0.579283 (-0.184492) | 0.414434 / 0.434364 (-0.019930) | 0.466821 / 0.540337 (-0.073517) | 0.629894 / 1.386936 (-0.757042) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#23921b08390db7dbb3186a8de40dc49a4066da76 \"CML watermark\")\n", "CI failures are unrelated", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005959 / 0.011353 (-0.005394) | 0.004164 / 0.011008 (-0.006844) | 0.082336 / 0.038508 (0.043828) | 0.070344 / 0.023109 (0.047234) | 0.348032 / 0.275898 (0.072134) | 0.366328 / 0.323480 (0.042848) | 0.003882 / 0.007986 (-0.004104) | 0.003619 / 0.004328 (-0.000709) | 0.063343 / 0.004250 (0.059093) | 0.056617 / 0.037052 (0.019564) | 0.351625 / 0.258489 (0.093136) | 0.395839 / 0.293841 (0.101998) | 0.030842 / 0.128546 (-0.097704) | 0.008363 / 0.075646 (-0.067284) | 0.300535 / 0.419271 (-0.118737) | 0.053303 / 0.043533 (0.009770) | 0.354782 / 0.255139 (0.099643) | 0.364918 / 0.283200 (0.081719) | 0.025365 / 0.141683 (-0.116318) | 1.555009 / 1.452155 (0.102854) | 1.597443 / 1.492716 (0.104727) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.239808 / 0.018006 (0.221801) | 0.488164 / 0.000490 (0.487675) | 0.013183 / 0.000200 (0.012983) | 0.000483 / 0.000054 (0.000429) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027938 / 0.037411 (-0.009473) | 0.078521 / 0.014526 (0.063995) | 0.095498 / 0.176557 (-0.081059) | 0.150884 / 0.737135 (-0.586251) | 0.097577 / 0.296338 (-0.198762) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) 
| 0.384546 / 0.215209 (0.169337) | 4.037707 / 2.077655 (1.960053) | 1.940321 / 1.504120 (0.436201) | 1.716741 / 1.541195 (0.175546) | 1.837200 / 1.468490 (0.368710) | 0.502112 / 4.584777 (-4.082665) | 3.770452 / 3.745712 (0.024740) | 3.325691 / 5.269862 (-1.944171) | 2.015622 / 4.565676 (-2.550055) | 0.056246 / 0.424275 (-0.368029) | 0.007320 / 0.007607 (-0.000287) | 0.445553 / 0.226044 (0.219509) | 4.567233 / 2.268929 (2.298304) | 2.319531 / 55.444624 (-53.125093) | 1.968664 / 6.876477 (-4.907813) | 2.122349 / 2.142072 (-0.019724) | 0.573688 / 4.805227 (-4.231540) | 0.131410 / 6.500664 (-6.369254) | 0.062767 / 0.075469 (-0.012702) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.255244 / 1.841788 (-0.586543) | 19.042480 / 8.074308 (10.968172) | 13.935342 / 10.191392 (3.743950) | 0.161259 / 0.680424 (-0.519165) | 0.020582 / 0.534201 (-0.513619) | 0.391365 / 0.579283 (-0.187918) | 0.417462 / 0.434364 (-0.016902) | 0.473121 / 0.540337 (-0.067216) | 0.674768 / 1.386936 (-0.712168) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006299 / 0.011353 (-0.005054) | 0.003969 / 0.011008 (-0.007040) | 0.063558 / 0.038508 (0.025050) | 0.073847 / 0.023109 (0.050738) | 0.407064 / 0.275898 (0.131166) | 0.440695 / 0.323480 (0.117215) | 0.005783 / 0.007986 (-0.002203) | 0.003517 / 0.004328 (-0.000812) | 0.065721 / 0.004250 (0.061470) | 0.056390 / 0.037052 (0.019338) | 0.419019 / 0.258489 (0.160530) | 0.450721 / 0.293841 (0.156880) | 0.034094 / 0.128546 (-0.094452) | 0.008594 / 0.075646 (-0.067052) | 0.069254 / 0.419271 (-0.350017) | 0.049218 / 0.043533 (0.005685) | 0.413312 / 0.255139 (0.158173) | 0.439454 / 0.283200 (0.156255) | 0.021481 / 0.141683 (-0.120202) | 1.517536 / 1.452155 (0.065382) | 1.530532 / 1.492716 (0.037815) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.235392 / 0.018006 (0.217386) | 0.477371 / 0.000490 (0.476881) | 0.007070 / 0.000200 (0.006870) | 
0.000132 / 0.000054 (0.000077) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031909 / 0.037411 (-0.005502) | 0.092459 / 0.014526 (0.077933) | 0.105795 / 0.176557 (-0.070761) | 0.157745 / 0.737135 (-0.579390) | 0.104187 / 0.296338 (-0.192152) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.424385 / 0.215209 (0.209176) | 4.445371 / 2.077655 (2.367716) | 2.423639 / 1.504120 (0.919519) | 2.188167 / 1.541195 (0.646972) | 2.171023 / 1.468490 (0.702532) | 0.483566 / 4.584777 (-4.101211) | 3.825702 / 3.745712 (0.079990) | 3.276350 / 5.269862 (-1.993512) | 2.063075 / 4.565676 (-2.502602) | 0.061628 / 0.424275 (-0.362647) | 0.008176 / 0.007607 (0.000569) | 0.506697 / 0.226044 (0.280653) | 5.067924 / 2.268929 (2.798995) | 2.785567 / 55.444624 (-52.659057) | 2.457340 / 6.876477 (-4.419137) | 2.599646 / 2.142072 (0.457574) | 0.581550 / 4.805227 (-4.223677) | 0.131712 / 6.500664 (-6.368952) | 0.058776 / 0.075469 (-0.016693) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.356639 / 1.841788 (-0.485148) | 20.103463 / 8.074308 (12.029155) | 14.481010 / 10.191392 (4.289618) | 0.162870 / 0.680424 (-0.517554) | 0.023197 / 0.534201 (-0.511004) | 0.413042 / 0.579283 (-0.166241) | 0.427494 / 0.434364 (-0.006870) | 0.508457 / 0.540337 (-0.031880) | 0.662412 / 1.386936 (-0.724524) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#05fe5c06d42f84408b933c2809acb9b7449cbbb3 \"CML watermark\")\n" ]
"2023-09-15T14:23:33Z"
"2023-09-19T18:02:21Z"
"2023-09-19T17:53:17Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6243.diff", "html_url": "https://github.com/huggingface/datasets/pull/6243", "merged_at": "2023-09-19T17:53:17Z", "patch_url": "https://github.com/huggingface/datasets/pull/6243.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6243" }
Fix #6242
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6243/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6243/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5763
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5763/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5763/comments
https://api.github.com/repos/huggingface/datasets/issues/5763/events
https://github.com/huggingface/datasets/pull/5763
1,670,476,302
PR_kwDODunzps5OcMI7
5,763
fix typo: "mow" -> "now"
{ "avatar_url": "https://avatars.githubusercontent.com/u/1967608?v=4", "events_url": "https://api.github.com/users/csris/events{/privacy}", "followers_url": "https://api.github.com/users/csris/followers", "following_url": "https://api.github.com/users/csris/following{/other_user}", "gists_url": "https://api.github.com/users/csris/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/csris", "id": 1967608, "login": "csris", "node_id": "MDQ6VXNlcjE5Njc2MDg=", "organizations_url": "https://api.github.com/users/csris/orgs", "received_events_url": "https://api.github.com/users/csris/received_events", "repos_url": "https://api.github.com/users/csris/repos", "site_admin": false, "starred_url": "https://api.github.com/users/csris/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/csris/subscriptions", "type": "User", "url": "https://api.github.com/users/csris" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006804 / 0.011353 (-0.004549) | 0.004984 / 0.011008 (-0.006024) | 0.096781 / 0.038508 (0.058273) | 0.033049 / 0.023109 (0.009939) | 0.297681 / 0.275898 (0.021783) | 0.329553 / 0.323480 (0.006073) | 0.005697 / 0.007986 (-0.002289) | 0.004019 / 0.004328 (-0.000310) | 0.072691 / 0.004250 (0.068441) | 0.046921 / 0.037052 (0.009868) | 0.311467 / 0.258489 (0.052978) | 0.337616 / 0.293841 (0.043775) | 0.042400 / 0.128546 (-0.086146) | 0.011919 / 0.075646 (-0.063727) | 0.331390 / 0.419271 (-0.087881) | 0.051004 / 0.043533 (0.007471) | 0.295317 / 0.255139 (0.040178) | 0.316570 / 0.283200 (0.033371) | 0.099283 / 0.141683 (-0.042400) | 1.430583 / 1.452155 (-0.021572) | 1.493550 / 1.492716 (0.000834) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.213634 / 0.018006 (0.195628) | 0.432557 / 0.000490 (0.432067) | 0.001586 / 0.000200 (0.001386) | 0.000079 / 0.000054 (0.000025) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025249 / 0.037411 (-0.012162) | 0.105433 / 0.014526 (0.090908) | 0.113474 / 0.176557 (-0.063082) | 0.168799 / 0.737135 (-0.568336) | 0.119363 / 0.296338 (-0.176975) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.412450 / 0.215209 (0.197241) | 4.117432 / 2.077655 (2.039777) | 
1.935176 / 1.504120 (0.431056) | 1.745674 / 1.541195 (0.204479) | 1.853872 / 1.468490 (0.385382) | 0.703429 / 4.584777 (-3.881348) | 3.756981 / 3.745712 (0.011269) | 3.730607 / 5.269862 (-1.539255) | 1.839052 / 4.565676 (-2.726624) | 0.087574 / 0.424275 (-0.336701) | 0.012293 / 0.007607 (0.004686) | 0.517234 / 0.226044 (0.291190) | 5.189759 / 2.268929 (2.920831) | 2.418739 / 55.444624 (-53.025885) | 2.081424 / 6.876477 (-4.795053) | 2.204464 / 2.142072 (0.062392) | 0.842768 / 4.805227 (-3.962459) | 0.169014 / 6.500664 (-6.331650) | 0.063711 / 0.075469 (-0.011758) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.180636 / 1.841788 (-0.661152) | 14.816088 / 8.074308 (6.741779) | 14.290085 / 10.191392 (4.098693) | 0.165267 / 0.680424 (-0.515156) | 0.017290 / 0.534201 (-0.516911) | 0.419678 / 0.579283 (-0.159605) | 0.418164 / 0.434364 (-0.016200) | 0.492210 / 0.540337 (-0.048127) | 0.588528 / 1.386936 (-0.798408) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007144 / 0.011353 (-0.004209) | 0.005223 / 0.011008 (-0.005785) | 0.073583 / 0.038508 (0.035075) | 0.033534 / 0.023109 (0.010425) | 0.339020 / 0.275898 (0.063122) | 0.366546 / 0.323480 (0.043066) | 0.006245 / 0.007986 (-0.001741) | 0.004081 / 0.004328 (-0.000247) | 0.073089 / 0.004250 (0.068839) | 0.047024 / 0.037052 (0.009971) | 0.342540 / 0.258489 (0.084051) | 0.379743 / 0.293841 (0.085902) | 0.037551 / 0.128546 (-0.090995) | 0.012246 / 0.075646 (-0.063400) | 0.084796 / 0.419271 (-0.334476) | 0.052256 / 0.043533 (0.008723) | 0.342675 / 0.255139 (0.087536) | 0.367157 / 0.283200 (0.083957) | 0.102939 / 0.141683 (-0.038744) | 1.409039 / 1.452155 (-0.043115) | 1.526137 / 1.492716 (0.033420) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.208143 / 0.018006 (0.190136) | 0.437940 / 0.000490 (0.437450) | 0.000424 / 0.000200 (0.000224) | 0.000056 / 0.000054 (0.000002) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028321 / 0.037411 (-0.009091) | 0.110417 / 0.014526 (0.095891) | 0.119449 / 0.176557 (-0.057107) | 0.168081 / 0.737135 (-0.569054) | 0.126658 / 0.296338 (-0.169681) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.429302 / 0.215209 (0.214093) | 4.270547 / 2.077655 (2.192892) | 2.061323 / 1.504120 (0.557203) | 1.857877 / 1.541195 (0.316682) | 1.873317 / 1.468490 (0.404827) | 0.688750 / 4.584777 (-3.896027) | 3.767951 / 3.745712 (0.022239) | 2.011436 / 5.269862 (-3.258426) | 1.299965 / 4.565676 (-3.265712) | 0.084799 / 0.424275 (-0.339476) | 0.012082 / 0.007607 (0.004475) | 0.521981 / 0.226044 (0.295937) | 5.265333 / 2.268929 (2.996405) | 2.494326 / 55.444624 (-52.950298) | 2.144672 / 6.876477 (-4.731804) | 2.365624 / 2.142072 (0.223551) | 0.839868 / 4.805227 (-3.965359) | 0.166614 / 6.500664 (-6.334050) | 0.063804 / 0.075469 (-0.011665) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.264623 / 1.841788 (-0.577164) | 14.946515 / 8.074308 (6.872207) | 14.450115 / 10.191392 (4.258723) | 0.163878 / 0.680424 (-0.516546) | 0.017501 / 0.534201 (-0.516700) | 0.420992 / 0.579283 (-0.158291) | 0.423005 / 0.434364 (-0.011359) | 0.489505 / 0.540337 (-0.050832) | 0.594631 / 1.386936 (-0.792305) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#fd893098627230cc734f6009ad04cf885c979ac4 \"CML watermark\")\n" ]
"2023-04-17T06:03:44Z"
"2023-04-17T15:01:53Z"
"2023-04-17T14:54:46Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5763.diff", "html_url": "https://github.com/huggingface/datasets/pull/5763", "merged_at": "2023-04-17T14:54:46Z", "patch_url": "https://github.com/huggingface/datasets/pull/5763.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5763" }
I noticed a typo as I was reading the datasets documentation. This PR contains a trivial fix changing "mow" to "now."
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5763/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5763/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5518
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5518/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5518/comments
https://api.github.com/repos/huggingface/datasets/issues/5518/events
https://github.com/huggingface/datasets/pull/5518
1,578,203,962
PR_kwDODunzps5Joom3
5,518
Remove py.typed
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008283 / 0.011353 (-0.003070) | 0.004450 / 0.011008 (-0.006558) | 0.099773 / 0.038508 (0.061265) | 0.029068 / 0.023109 (0.005959) | 0.296799 / 0.275898 (0.020901) | 0.350946 / 0.323480 (0.027466) | 0.007331 / 0.007986 (-0.000655) | 0.004550 / 0.004328 (0.000222) | 0.077603 / 0.004250 (0.073352) | 0.034307 / 0.037052 (-0.002746) | 0.313174 / 0.258489 (0.054685) | 0.342270 / 0.293841 (0.048429) | 0.033463 / 0.128546 (-0.095083) | 0.011421 / 0.075646 (-0.064225) | 0.317188 / 0.419271 (-0.102083) | 0.040985 / 0.043533 (-0.002548) | 0.300800 / 0.255139 (0.045661) | 0.360171 / 0.283200 (0.076972) | 0.086702 / 0.141683 (-0.054981) | 1.474679 / 1.452155 (0.022525) | 1.518319 / 1.492716 (0.025603) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.198059 / 0.018006 (0.180052) | 0.403502 / 0.000490 (0.403012) | 0.002663 / 0.000200 (0.002463) | 0.000218 / 0.000054 (0.000164) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022946 / 0.037411 (-0.014465) | 0.096466 / 0.014526 (0.081940) | 0.104092 / 0.176557 (-0.072465) | 0.138499 / 0.737135 (-0.598636) | 0.106941 / 0.296338 (-0.189397) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.416000 / 0.215209 (0.200791) | 4.153120 / 2.077655 (2.075465) | 
1.843957 / 1.504120 (0.339837) | 1.650391 / 1.541195 (0.109197) | 1.684765 / 1.468490 (0.216275) | 0.688917 / 4.584777 (-3.895860) | 3.442797 / 3.745712 (-0.302916) | 1.834685 / 5.269862 (-3.435176) | 1.148046 / 4.565676 (-3.417631) | 0.082299 / 0.424275 (-0.341976) | 0.012399 / 0.007607 (0.004792) | 0.521099 / 0.226044 (0.295054) | 5.223695 / 2.268929 (2.954767) | 2.270970 / 55.444624 (-53.173654) | 1.921321 / 6.876477 (-4.955156) | 1.954675 / 2.142072 (-0.187398) | 0.809383 / 4.805227 (-3.995845) | 0.148562 / 6.500664 (-6.352102) | 0.064764 / 0.075469 (-0.010705) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.212687 / 1.841788 (-0.629101) | 13.491641 / 8.074308 (5.417333) | 12.972926 / 10.191392 (2.781534) | 0.137036 / 0.680424 (-0.543388) | 0.028591 / 0.534201 (-0.505610) | 0.391980 / 0.579283 (-0.187303) | 0.394474 / 0.434364 (-0.039889) | 0.456582 / 0.540337 (-0.083755) | 0.535984 / 1.386936 (-0.850952) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006419 / 0.011353 (-0.004934) | 0.004295 / 0.011008 (-0.006713) | 0.077702 / 0.038508 (0.039194) | 0.027368 / 0.023109 (0.004259) | 0.336713 / 0.275898 (0.060815) | 0.370074 / 0.323480 (0.046594) | 0.004657 / 0.007986 (-0.003328) | 0.003308 / 0.004328 (-0.001021) | 0.075747 / 0.004250 (0.071496) | 0.037323 / 0.037052 (0.000271) | 0.342382 / 0.258489 (0.083893) | 0.381109 / 0.293841 (0.087269) | 0.031804 / 0.128546 (-0.096742) | 0.011761 / 0.075646 (-0.063885) | 0.086818 / 0.419271 (-0.332454) | 0.042058 / 0.043533 (-0.001475) | 0.346295 / 0.255139 (0.091156) | 0.366857 / 0.283200 (0.083658) | 0.088666 / 0.141683 (-0.053016) | 1.533711 / 1.452155 (0.081556) | 1.537422 / 1.492716 (0.044705) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.220416 / 0.018006 (0.202410) | 0.387393 / 0.000490 (0.386903) | 0.003739 / 0.000200 (0.003539) | 0.000076 / 0.000054 (0.000021) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024083 / 0.037411 (-0.013329) | 0.098036 / 0.014526 (0.083510) | 0.102908 / 0.176557 (-0.073648) | 0.139512 / 0.737135 (-0.597623) | 0.107703 / 0.296338 (-0.188635) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.437615 / 0.215209 (0.222406) | 4.373140 / 2.077655 (2.295486) | 2.065063 / 1.504120 (0.560943) | 1.863938 / 1.541195 (0.322743) | 1.907955 / 1.468490 (0.439465) | 0.695830 / 4.584777 (-3.888947) | 3.394248 / 3.745712 (-0.351464) | 1.842794 / 5.269862 (-3.427068) | 1.156928 / 4.565676 (-3.408748) | 0.082505 / 0.424275 (-0.341771) | 0.012405 / 0.007607 (0.004798) | 0.538041 / 0.226044 (0.311997) | 5.363508 / 2.268929 (3.094579) | 2.509383 / 55.444624 (-52.935241) | 2.160416 / 6.876477 (-4.716061) | 2.162054 / 2.142072 (0.019982) | 0.802419 / 4.805227 (-4.002809) | 0.150529 / 6.500664 (-6.350135) | 0.066418 / 0.075469 (-0.009051) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.257221 / 1.841788 (-0.584567) | 13.748839 / 8.074308 (5.674531) | 13.310555 / 10.191392 (3.119163) | 0.152997 / 0.680424 (-0.527427) | 0.016618 / 0.534201 (-0.517583) | 0.375443 / 0.579283 (-0.203840) | 0.374942 / 0.434364 (-0.059422) | 0.466704 / 0.540337 (-0.073633) | 0.553563 / 1.386936 (-0.833373) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#1ac8343af4e2dc6fe0771d0be70eaf8a6e5a8fbc \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009260 / 0.011353 (-0.002092) | 0.005213 / 0.011008 (-0.005795) | 0.102151 / 0.038508 (0.063643) | 0.035619 / 0.023109 (0.012510) | 0.296266 / 0.275898 (0.020368) | 0.359884 / 0.323480 (0.036404) | 0.008176 / 0.007986 (0.000190) | 0.005031 / 0.004328 (0.000703) | 0.077178 / 0.004250 (0.072927) | 0.041898 / 0.037052 (0.004846) | 0.305640 / 0.258489 (0.047151) | 0.346275 / 0.293841 (0.052434) | 0.037684 / 0.128546 (-0.090863) | 0.011816 / 0.075646 (-0.063831) | 0.334853 / 0.419271 (-0.084419) | 0.046535 / 0.043533 (0.003002) | 0.291544 / 0.255139 (0.036405) | 0.317194 / 0.283200 (0.033994) | 0.103212 / 0.141683 (-0.038471) | 1.424994 / 1.452155 (-0.027161) | 1.486216 / 1.492716 (-0.006501) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.011816 / 0.018006 (-0.006190) | 0.442092 / 0.000490 (0.441602) | 0.001297 / 0.000200 (0.001097) | 0.000078 / 0.000054 (0.000024) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028277 / 0.037411 (-0.009134) | 0.110431 / 0.014526 (0.095905) | 0.118456 / 0.176557 (-0.058100) | 0.156778 / 0.737135 (-0.580357) | 0.123036 / 0.296338 (-0.173302) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.399006 / 0.215209 (0.183797) | 3.990367 / 2.077655 (1.912712) | 1.798739 / 1.504120 (0.294620) | 1.607133 / 1.541195 (0.065938) | 1.748897 / 1.468490 (0.280407) | 0.690666 / 4.584777 (-3.894111) | 3.795892 / 3.745712 (0.050180) | 3.479317 / 5.269862 (-1.790545) | 1.861268 / 4.565676 (-2.704409) | 0.085235 / 0.424275 (-0.339040) | 0.012997 / 0.007607 (0.005390) | 0.512489 / 0.226044 (0.286445) | 5.039515 / 2.268929 (2.770587) | 2.258079 / 55.444624 (-53.186545) | 1.907178 / 6.876477 (-4.969299) | 1.985953 / 2.142072 (-0.156119) | 0.843595 / 4.805227 (-3.961633) | 0.165286 / 6.500664 (-6.335378) | 0.063026 / 0.075469 (-0.012443) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.186680 / 1.841788 (-0.655108) | 14.976016 / 8.074308 (6.901708) | 14.436941 / 10.191392 (4.245549) | 0.172620 / 0.680424 (-0.507804) | 0.028760 / 0.534201 (-0.505441) | 0.443505 / 0.579283 (-0.135778) | 0.435665 / 0.434364 (0.001301) | 0.520164 / 0.540337 
(-0.020174) | 0.608348 / 1.386936 (-0.778588) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007510 / 0.011353 (-0.003842) | 0.005012 / 0.011008 (-0.005996) | 0.077865 / 0.038508 (0.039357) | 0.033610 / 0.023109 (0.010500) | 0.365996 / 0.275898 (0.090098) | 0.416393 / 0.323480 (0.092913) | 0.005672 / 0.007986 (-0.002314) | 0.005334 / 0.004328 (0.001006) | 0.074948 / 0.004250 (0.070698) | 0.045962 / 0.037052 (0.008909) | 0.362209 / 0.258489 (0.103719) | 0.410522 / 0.293841 (0.116681) | 0.036247 / 0.128546 (-0.092299) | 0.012432 / 0.075646 (-0.063214) | 0.088754 / 0.419271 (-0.330517) | 0.048848 / 0.043533 (0.005315) | 0.370994 / 0.255139 (0.115855) | 0.382476 / 0.283200 (0.099277) | 0.103443 / 0.141683 (-0.038240) | 1.483127 / 1.452155 (0.030972) | 1.573366 / 1.492716 (0.080650) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.224163 / 0.018006 (0.206157) | 0.475136 / 0.000490 (0.474646) | 0.000394 / 0.000200 (0.000194) | 0.000057 / 0.000054 (0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030612 / 0.037411 (-0.006799) | 0.113983 / 0.014526 (0.099457) | 0.121835 / 0.176557 (-0.054722) | 0.160092 / 0.737135 (-0.577043) | 0.127431 / 0.296338 (-0.168908) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.421389 / 0.215209 (0.206179) | 4.207638 / 2.077655 (2.129984) | 2.040265 / 1.504120 (0.536145) | 1.868617 / 1.541195 (0.327422) | 1.979016 / 
1.468490 (0.510526) | 0.712499 / 4.584777 (-3.872278) | 3.783091 / 3.745712 (0.037379) | 2.124293 / 5.269862 (-3.145569) | 1.382028 / 4.565676 (-3.183649) | 0.087133 / 0.424275 (-0.337142) | 0.012634 / 0.007607 (0.005027) | 0.518965 / 0.226044 (0.292920) | 5.188330 / 2.268929 (2.919401) | 2.556593 / 55.444624 (-52.888031) | 2.243081 / 6.876477 (-4.633396) | 2.340420 / 2.142072 (0.198347) | 0.858010 / 4.805227 (-3.947218) | 0.169165 / 6.500664 (-6.331499) | 0.065177 / 0.075469 (-0.010292) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.297350 / 1.841788 (-0.544438) | 15.404241 / 8.074308 (7.329933) | 13.806039 / 10.191392 (3.614647) | 0.182055 / 0.680424 (-0.498369) | 0.017789 / 0.534201 (-0.516412) | 0.422828 / 0.579283 (-0.156455) | 0.418269 / 0.434364 (-0.016095) | 0.521561 / 0.540337 (-0.018777) | 0.642526 / 1.386936 (-0.744410) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#0009eea6819c32a888f65b0fdb5889b6d311c436 \"CML watermark\")\n" ]
"2023-02-09T16:22:29Z"
"2023-02-13T13:55:49Z"
"2023-02-13T13:48:40Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5518.diff", "html_url": "https://github.com/huggingface/datasets/pull/5518", "merged_at": "2023-02-13T13:48:40Z", "patch_url": "https://github.com/huggingface/datasets/pull/5518.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5518" }
Fix https://github.com/huggingface/datasets/issues/3841
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5518/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5518/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1855
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1855/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1855/comments
https://api.github.com/repos/huggingface/datasets/issues/1855/events
https://github.com/huggingface/datasets/pull/1855
805,256,579
MDExOlB1bGxSZXF1ZXN0NTcwODkzNDY3
1,855
Minor fix in the docs
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[]
"2021-02-10T07:27:43Z"
"2021-02-10T12:33:09Z"
"2021-02-10T12:33:09Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1855.diff", "html_url": "https://github.com/huggingface/datasets/pull/1855", "merged_at": "2021-02-10T12:33:09Z", "patch_url": "https://github.com/huggingface/datasets/pull/1855.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1855" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1855/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1855/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2468
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2468/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2468/comments
https://api.github.com/repos/huggingface/datasets/issues/2468/events
https://github.com/huggingface/datasets/pull/2468
916,427,320
MDExOlB1bGxSZXF1ZXN0NjY2MDk0ODI5
2,468
Implement ClassLabel encoding in JSON loader
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
{ "closed_at": "2021-07-09T05:50:07Z", "closed_issues": 12, "created_at": "2021-05-31T16:13:06Z", "creator": { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }, "description": "Next minor release", "due_on": "2021-07-08T07:00:00Z", "html_url": "https://github.com/huggingface/datasets/milestone/5", "id": 6808903, "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/5/labels", "node_id": "MDk6TWlsZXN0b25lNjgwODkwMw==", "number": 5, "open_issues": 0, "state": "closed", "title": "1.9", "updated_at": "2021-07-12T14:12:00Z", "url": "https://api.github.com/repos/huggingface/datasets/milestones/5" }
[ "No, nevermind @lhoestq. Thanks to you for your reviews!" ]
"2021-06-09T17:08:54Z"
"2021-06-28T15:39:54Z"
"2021-06-28T15:05:35Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2468.diff", "html_url": "https://github.com/huggingface/datasets/pull/2468", "merged_at": "2021-06-28T15:05:34Z", "patch_url": "https://github.com/huggingface/datasets/pull/2468.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2468" }
Close #2365.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2468/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2468/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2609
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2609/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2609/comments
https://api.github.com/repos/huggingface/datasets/issues/2609/events
https://github.com/huggingface/datasets/pull/2609
939,616,682
MDExOlB1bGxSZXF1ZXN0Njg1ODA3MTMz
2,609
Fix potential DuplicatedKeysError
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
{ "closed_at": "2021-07-21T15:36:49Z", "closed_issues": 29, "created_at": "2021-06-08T18:48:33Z", "creator": { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }, "description": "Next minor release", "due_on": "2021-08-05T07:00:00Z", "html_url": "https://github.com/huggingface/datasets/milestone/6", "id": 6836458, "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/6/labels", "node_id": "MDk6TWlsZXN0b25lNjgzNjQ1OA==", "number": 6, "open_issues": 0, "state": "closed", "title": "1.10", "updated_at": "2021-07-21T15:36:49Z", "url": "https://api.github.com/repos/huggingface/datasets/milestones/6" }
[ "Finally, I'm splitting this PR." ]
"2021-07-08T08:38:04Z"
"2021-07-12T14:13:16Z"
"2021-07-09T16:42:08Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2609.diff", "html_url": "https://github.com/huggingface/datasets/pull/2609", "merged_at": "2021-07-09T16:42:08Z", "patch_url": "https://github.com/huggingface/datasets/pull/2609.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2609" }
Fix potential DuplicatedKeysError by ensuring keys are unique. We should promote it as good practice that keys are generated programmatically so they are unique, instead of being read from the data (which might not be unique).
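A minimal sketch of the key-generation practice described above, assuming a hypothetical `generate_examples`-style generator over a TSV file; the file layout and column names are illustrative and not taken from any particular dataset script:

```python
import csv

def generate_examples(filepath):
    """Yield (key, example) pairs with programmatically unique keys."""
    with open(filepath, encoding="utf-8", newline="") as f:
        reader = csv.DictReader(f, delimiter="\t")
        # A running counter is guaranteed to be unique, unlike an ID column
        # read from the data, which may contain duplicates.
        for idx, row in enumerate(reader):
            yield idx, {"text": row.get("text", ""), "label": row.get("label", "")}
```

With keys generated this way, two rows that happen to share the same ID in the source file no longer collide at write time.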
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2609/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2609/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5000
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5000/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5000/comments
https://api.github.com/repos/huggingface/datasets/issues/5000/events
https://github.com/huggingface/datasets/issues/5000
1,379,709,398
I_kwDODunzps5SPLHW
5,000
Dataset Viewer issue for asapp/slue
{ "avatar_url": "https://avatars.githubusercontent.com/u/56092571?v=4", "events_url": "https://api.github.com/users/fwu-asapp/events{/privacy}", "followers_url": "https://api.github.com/users/fwu-asapp/followers", "following_url": "https://api.github.com/users/fwu-asapp/following{/other_user}", "gists_url": "https://api.github.com/users/fwu-asapp/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/fwu-asapp", "id": 56092571, "login": "fwu-asapp", "node_id": "MDQ6VXNlcjU2MDkyNTcx", "organizations_url": "https://api.github.com/users/fwu-asapp/orgs", "received_events_url": "https://api.github.com/users/fwu-asapp/received_events", "repos_url": "https://api.github.com/users/fwu-asapp/repos", "site_admin": false, "starred_url": "https://api.github.com/users/fwu-asapp/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fwu-asapp/subscriptions", "type": "User", "url": "https://api.github.com/users/fwu-asapp" }
[]
closed
false
null
[]
null
[ "<img width=\"519\" alt=\"Capture d’écran 2022-09-20 à 22 33 47\" src=\"https://user-images.githubusercontent.com/1676121/191358952-1220cb7d-745a-4203-a66b-3c707b25038f.png\">\r\n\r\n```\r\nNot found.\r\n\r\nError code: SplitsResponseNotFound\r\n```\r\n\r\nhttps://datasets-server.huggingface.co/splits?dataset=asapp/slue\r\n\r\n```json\r\n{\"error\":\"Not found.\"}\r\n```", "I just launched a refresh. It's weird, I don't see any entry for this dataset in the cache, it's a bug on our side. In order to try to understand what happened, did you change the visibility status from private to public, by any chance?", "The dataset is being refreshed, please retry later.\r\n\r\n<img width=\"802\" alt=\"Capture d’écran 2022-09-20 à 22 39 46\" src=\"https://user-images.githubusercontent.com/1676121/191360072-7cc86486-4e84-4b47-8f9a-4a69fe84a5ac.png\">\r\n", "OK. We now have an issue because the dataset cannot be streamed, and the dataset viewer relies on it.\r\n\r\nMaybe @huggingface/datasets can help:\r\n\r\n```\r\nError code: StreamingRowsError\r\nException: NotImplementedError\r\nMessage: Extraction protocol for TAR archives like 'https://public-dataset-model-store.awsdev.asapp.com/users/sshon/public/slue/slue-voxpopuli_v0.2_blind.tar.gz' is not implemented in streaming mode. Please use `dl_manager.iter_archive` instead.\r\nTraceback: Traceback (most recent call last):\r\n File \"/src/services/worker/src/worker/responses/first_rows.py\", line 337, in get_first_rows_response\r\n rows = get_rows(dataset, config, split, streaming=True, rows_max_number=rows_max_number, hf_token=hf_token)\r\n File \"/src/services/worker/src/worker/utils.py\", line 123, in decorator\r\n return func(*args, **kwargs)\r\n File \"/src/services/worker/src/worker/responses/first_rows.py\", line 65, in get_rows\r\n ds = load_dataset(\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py\", line 1739, in load_dataset\r\n return builder_instance.as_streaming_dataset(split=split)\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py\", line 1025, in as_streaming_dataset\r\n splits_generators = {sg.name: sg for sg in self._split_generators(dl_manager)}\r\n File \"/tmp/modules-cache/datasets_modules/datasets/asapp--slue/adaa0c78233e1a1df9c2f054e690ec5fc3eaf453bd76b80fe5cbe5728e55d9b1/slue.py\", line 189, in _split_generators\r\n dl_dir = dl_manager.download_and_extract(_DL_URLS[config_name])\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py\", line 944, in download_and_extract\r\n return self.extract(self.download(url_or_urls))\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py\", line 907, in extract\r\n urlpaths = map_nested(self._extract, path_or_paths, map_tuple=True)\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/utils/py_utils.py\", line 385, in map_nested\r\n return function(data_struct)\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py\", line 912, in _extract\r\n protocol = _get_extraction_protocol(urlpath, use_auth_token=self.download_config.use_auth_token)\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py\", line 390, in _get_extraction_protocol\r\n raise NotImplementedError(\r\n NotImplementedError: Extraction protocol for TAR archives like 
'https://public-dataset-model-store.awsdev.asapp.com/users/sshon/public/slue/slue-voxpopuli_v0.2_blind.tar.gz' is not implemented in streaming mode. Please use `dl_manager.iter_archive` instead.\r\n```", "Thanks @severo, \r\n\r\nDo I have to modify the python script to support streaming so that it can be previewed?\r\nIs there a document somewhere that I can follow?\r\n", "Hi @fwu-asapp thanks for reporting, and thanks @severo for the investigation.\r\n\r\nAs explained by @severo, the preview requires that your dataset loading script supports streaming.\r\n\r\nThere are several options here:\r\n- the easiest would be to replace the source files, archived using ZIP instead TAR: the TAR format does not allow random access while streaming, but only sequential access; the ZIP files support streaming out of the box.\r\n- alternatively, to stream TAR archives you can use `dl_manager.iter_archive`: the only prerequisite is that your \"index\" files (.tsv) should have been archived before their corresponding audio files, so while iterating the content of the TAR archive, the metadata files appear first. I think this is the case for voxpopuli tar but not for voxceleb.\r\n- if your .tsv files were not archived before their corresponding audio files (I think this is the case for voxceleb), then you should extract the .tsv files and host them separately (you can host them on the same Hugging Face Hub).\r\n - you can take as example, e.g.: https://huggingface.co/datasets/vivos/blob/main/vivos.py\r\n\r\nAs an advanced approach, you can handle both streaming and non-streaming cases separately.\r\n- as for example: https://huggingface.co/datasets/librispeech_asr/blob/main/librispeech_asr.py or https://huggingface.co/datasets/google/fleurs/blob/main/fleurs.py\r\n\r\nSee related discussion:\r\n- https://github.com/huggingface/datasets/issues/4697#issuecomment-1191502492", "Thanks @albertvillanova for your clarification. I'll talk to my collaborators to see if we can replace those files. Let me just close this issue for now.", "FYI, after replacing the source files with the ZIP ones, the dataset viewer works well. Thanks again to @severo and @albertvillanova for your help!", "Great! And thank you for sharing that interesting dataset!" ]
"2022-09-20T16:45:45Z"
"2022-09-27T07:04:03Z"
"2022-09-21T07:24:07Z"
NONE
null
null
null
### Link https://huggingface.co/datasets/asapp/slue/viewer/ ### Description Hi, I wonder how to get the dataset viewer of our slue dataset to work. Best, Felix ### Owner Yes
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5000/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5000/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/3203
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3203/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3203/comments
https://api.github.com/repos/huggingface/datasets/issues/3203/events
https://github.com/huggingface/datasets/pull/3203
1,043,552,766
PR_kwDODunzps4uCNoT
3,203
Updated: DaNE - updated URL for download
{ "avatar_url": "https://avatars.githubusercontent.com/u/47593213?v=4", "events_url": "https://api.github.com/users/MalteHB/events{/privacy}", "followers_url": "https://api.github.com/users/MalteHB/followers", "following_url": "https://api.github.com/users/MalteHB/following{/other_user}", "gists_url": "https://api.github.com/users/MalteHB/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/MalteHB", "id": 47593213, "login": "MalteHB", "node_id": "MDQ6VXNlcjQ3NTkzMjEz", "organizations_url": "https://api.github.com/users/MalteHB/orgs", "received_events_url": "https://api.github.com/users/MalteHB/received_events", "repos_url": "https://api.github.com/users/MalteHB/repos", "site_admin": false, "starred_url": "https://api.github.com/users/MalteHB/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MalteHB/subscriptions", "type": "User", "url": "https://api.github.com/users/MalteHB" }
[]
closed
false
null
[]
null
[ "Actually it looks like the old URL is still working, and it's also the one that is mentioned in https://github.com/alexandrainst/danlp/blob/master/docs/docs/datasets.md\r\n\r\nWhat makes you think we should use the new URL ?", "@lhoestq Sorry! I might have jumped to conclusions a bit too fast here... \r\n\r\nI was working in Google Colab and got an error that it was unable to use the URL. I then forked the project, updated the URL, ran it locally and it worked. I therefore assumed that my URL update fixed the issue, however, I see now that it might rather be a Google Colab issue... \r\n\r\nStill - this seems to be the official URL for downloading the dataset, and I think that it will be most beneficial to use. :-) ", "It looks like they're using these new urls for their new datasets. Maybe let's change to the new URL in case the old one stops working at one point. Thanks" ]
"2021-11-03T12:55:13Z"
"2021-11-04T13:14:36Z"
"2021-11-04T11:46:43Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3203.diff", "html_url": "https://github.com/huggingface/datasets/pull/3203", "merged_at": "2021-11-04T11:46:43Z", "patch_url": "https://github.com/huggingface/datasets/pull/3203.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3203" }
It seems that DaNLP has updated their download URLs, so the URL also needs to be updated here...
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3203/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3203/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2015
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2015/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2015/comments
https://api.github.com/repos/huggingface/datasets/issues/2015/events
https://github.com/huggingface/datasets/pull/2015
825,942,108
MDExOlB1bGxSZXF1ZXN0NTg3OTg4NTQ0
2,015
Fix ipython function creation in tests
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
"2021-03-09T13:36:59Z"
"2021-03-09T14:06:04Z"
"2021-03-09T14:06:03Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2015.diff", "html_url": "https://github.com/huggingface/datasets/pull/2015", "merged_at": "2021-03-09T14:06:03Z", "patch_url": "https://github.com/huggingface/datasets/pull/2015.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2015" }
The test at `tests/test_caching.py::RecurseDumpTest::test_dump_ipython_function` was failing in python 3.8 because the ipython function was not properly created. Fix #2010
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2015/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2015/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/583
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/583/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/583/comments
https://api.github.com/repos/huggingface/datasets/issues/583/events
https://github.com/huggingface/datasets/issues/583
695,166,265
MDU6SXNzdWU2OTUxNjYyNjU=
583
ArrowIndexError on Dataset.select
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
"2020-09-07T14:36:29Z"
"2020-09-08T07:43:15Z"
"2020-09-08T07:43:15Z"
MEMBER
null
null
null
If the indices table consists of several chunks, then `dataset.select` results in an `ArrowIndexError` for pyarrow < 1.0.0 Example: ```python from nlp import load_dataset mnli = load_dataset("glue", "mnli", split="train") shuffled = mnli.shuffle(seed=42) shuffled.select(list(range(len(mnli)))) ``` raises: ```python --------------------------------------------------------------------------- ArrowIndexError Traceback (most recent call last) <ipython-input-64-006a5d38d418> in <module> ----> 1 mnli.shuffle(seed=42).select(list(range(len(mnli)))) ~/Desktop/hf/nlp/src/nlp/fingerprint.py in wrapper(*args, **kwargs) 161 # Call actual function 162 --> 163 out = func(self, *args, **kwargs) 164 165 # Update fingerprint of in-place transforms + update in-place history of transforms ~/Desktop/hf/nlp/src/nlp/arrow_dataset.py in select(self, indices, keep_in_memory, indices_cache_file_name, writer_batch_size, new_fingerprint) 1653 if self._indices is not None: 1654 if PYARROW_V0: -> 1655 indices_array = self._indices.column(0).chunk(0).take(indices_array) 1656 else: 1657 indices_array = self._indices.column(0).take(indices_array) ~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/pyarrow/array.pxi in pyarrow.lib.Array.take() ~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status() ArrowIndexError: take index out of bounds ``` This is because the `take` method is only done on the first chunk, which only contains 1000 elements by default (mnli has ~400 000 elements). Shall we change that to use ```python pa.concat_tables(self._indices._indices.slice(i, 1) for i in indices_array) ``` instead of `take`? @thomwolf
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/583/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/583/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/906
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/906/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/906/comments
https://api.github.com/repos/huggingface/datasets/issues/906/events
https://github.com/huggingface/datasets/pull/906
752,403,395
MDExOlB1bGxSZXF1ZXN0NTI4NzM0MDY0
906
Fix url with backslash in windows for blimp and pg19
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
"2020-11-27T17:59:11Z"
"2020-11-27T18:19:56Z"
"2020-11-27T18:19:56Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/906.diff", "html_url": "https://github.com/huggingface/datasets/pull/906", "merged_at": "2020-11-27T18:19:55Z", "patch_url": "https://github.com/huggingface/datasets/pull/906.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/906" }
Following #903, I also fixed blimp and pg19, which were using `os.path.join` to create URLs. cc @albertvillanova
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/906/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/906/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2717
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2717/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2717/comments
https://api.github.com/repos/huggingface/datasets/issues/2717/events
https://github.com/huggingface/datasets/pull/2717
952,979,976
MDExOlB1bGxSZXF1ZXN0Njk3MDkzNDEx
2,717
Fix shuffle on IterableDataset that disables batching in case any functions were mapped
{ "avatar_url": "https://avatars.githubusercontent.com/u/7098967?v=4", "events_url": "https://api.github.com/users/amankhandelia/events{/privacy}", "followers_url": "https://api.github.com/users/amankhandelia/followers", "following_url": "https://api.github.com/users/amankhandelia/following{/other_user}", "gists_url": "https://api.github.com/users/amankhandelia/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/amankhandelia", "id": 7098967, "login": "amankhandelia", "node_id": "MDQ6VXNlcjcwOTg5Njc=", "organizations_url": "https://api.github.com/users/amankhandelia/orgs", "received_events_url": "https://api.github.com/users/amankhandelia/received_events", "repos_url": "https://api.github.com/users/amankhandelia/repos", "site_admin": false, "starred_url": "https://api.github.com/users/amankhandelia/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amankhandelia/subscriptions", "type": "User", "url": "https://api.github.com/users/amankhandelia" }
[]
closed
false
null
[]
null
[]
"2021-07-26T14:42:22Z"
"2021-07-26T18:04:14Z"
"2021-07-26T16:30:06Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2717.diff", "html_url": "https://github.com/huggingface/datasets/pull/2717", "merged_at": "2021-07-26T16:30:05Z", "patch_url": "https://github.com/huggingface/datasets/pull/2717.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2717" }
Made a very minor change to fix issue #2716: added the missing argument in the constructor call. As discussed in the bug report, the change is made to prevent the `shuffle` method call from resetting the value of the `batched` attribute in `MappedExamplesIterable`. Fix #2716.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2717/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2717/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6452
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6452/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6452/comments
https://api.github.com/repos/huggingface/datasets/issues/6452/events
https://github.com/huggingface/datasets/pull/6452
2,011,632,708
PR_kwDODunzps5gZ5oe
6,452
Praveen_repo_pull_req
{ "avatar_url": "https://avatars.githubusercontent.com/u/151713216?v=4", "events_url": "https://api.github.com/users/Praveenhh/events{/privacy}", "followers_url": "https://api.github.com/users/Praveenhh/followers", "following_url": "https://api.github.com/users/Praveenhh/following{/other_user}", "gists_url": "https://api.github.com/users/Praveenhh/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Praveenhh", "id": 151713216, "login": "Praveenhh", "node_id": "U_kgDOCQr1wA", "organizations_url": "https://api.github.com/users/Praveenhh/orgs", "received_events_url": "https://api.github.com/users/Praveenhh/received_events", "repos_url": "https://api.github.com/users/Praveenhh/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Praveenhh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Praveenhh/subscriptions", "type": "User", "url": "https://api.github.com/users/Praveenhh" }
[]
closed
false
null
[]
null
[]
"2023-11-27T07:07:50Z"
"2023-11-27T09:28:00Z"
"2023-11-27T09:28:00Z"
NONE
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6452.diff", "html_url": "https://github.com/huggingface/datasets/pull/6452", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/6452.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6452" }
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6452/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6452/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5174
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5174/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5174/comments
https://api.github.com/repos/huggingface/datasets/issues/5174/events
https://github.com/huggingface/datasets/pull/5174
1,427,216,416
PR_kwDODunzps5Bv3rh
5,174
Preserve None in list type cast in PyArrow 10
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
"2022-10-28T12:48:30Z"
"2022-10-28T13:15:33Z"
"2022-10-28T13:13:18Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5174.diff", "html_url": "https://github.com/huggingface/datasets/pull/5174", "merged_at": "2022-10-28T13:13:18Z", "patch_url": "https://github.com/huggingface/datasets/pull/5174.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5174" }
The `ListArray` type in PyArrow 10.0.0 supports the `mask` parameter, which allows us to preserve Nones in nested lists in `cast` instead of replacing them with empty lists. Fix https://github.com/huggingface/datasets/issues/3676
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5174/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5174/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/494
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/494/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/494/comments
https://api.github.com/repos/huggingface/datasets/issues/494/events
https://github.com/huggingface/datasets/pull/494
676,886,955
MDExOlB1bGxSZXF1ZXN0NDY2MTExOTQz
494
Fix numpy stacking
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "This PR also fixed a bug where numpy arrays were returned instead of pytorch tensors when getting with a clumn as a key." ]
"2020-08-11T13:40:30Z"
"2020-08-11T14:56:50Z"
"2020-08-11T13:49:52Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/494.diff", "html_url": "https://github.com/huggingface/datasets/pull/494", "merged_at": "2020-08-11T13:49:52Z", "patch_url": "https://github.com/huggingface/datasets/pull/494.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/494" }
When getting items using a column name as a key, numpy arrays were not stacked. I fixed that and added some tests. There is another issue that still needs to be fixed though: when getting items using a column name as a key, pytorch tensors are not stacked (it outputs a list of tensors). This PR should help to fix this issue.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/494/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/494/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2653
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2653/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2653/comments
https://api.github.com/repos/huggingface/datasets/issues/2653/events
https://github.com/huggingface/datasets/issues/2653
945,102,321
MDU6SXNzdWU5NDUxMDIzMjE=
2,653
Add SD task for SUPERB
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
{ "closed_at": "2021-09-02T05:34:03Z", "closed_issues": 2, "created_at": "2021-07-09T05:49:00Z", "creator": { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }, "description": "Next minor release", "due_on": "2021-07-30T07:00:00Z", "html_url": "https://github.com/huggingface/datasets/milestone/7", "id": 6931350, "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/7/labels", "node_id": "MDk6TWlsZXN0b25lNjkzMTM1MA==", "number": 7, "open_issues": 0, "state": "closed", "title": "1.11", "updated_at": "2021-09-02T05:34:03Z", "url": "https://api.github.com/repos/huggingface/datasets/milestones/7" }
[ "Note that this subset requires us to:\r\n\r\n* generate the LibriMix corpus from LibriSpeech\r\n* prepare the corpus for diarization\r\n\r\nAs suggested by @lhoestq we should perform these steps locally and add the prepared data to this public repo on the Hub: https://huggingface.co/datasets/superb/superb-data\r\n\r\nThen we can use the URLs for the files to load the data in `superb`'s dataset loading script.\r\n\r\nFor consistency, I suggest we name the folders in `superb-data` in the same way as the configs in the dataset loading script - e.g. use `sd` for speech diarization in both places :)", "@lewtun @lhoestq: \r\n\r\nI have already generated the LibriMix corpus and prepared the corpus for diarization. The output is 3 dirs (train, dev, test), each one containing 6 files: reco2dur rttm segments spk2utt utt2spk wav.scp\r\n\r\nNext steps:\r\n- Upload these files to the superb-data repo\r\n- Transcribe the corresponding s3prl processing of these files into our superb loading script\r\n\r\nNote that processing of these files is a bit more intricate than usual datasets: https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/diarization/dataset.py#L233\r\n\r\n" ]
"2021-07-15T07:51:40Z"
"2021-08-04T17:03:52Z"
"2021-08-04T17:03:52Z"
MEMBER
null
null
null
Include the SD (Speaker Diarization) task as described in the [SUPERB paper](https://arxiv.org/abs/2105.01051) and `s3prl` [instructions](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#sd-speaker-diarization). Steps: - [x] Generate the LibriMix corpus - [x] Prepare the corpus for diarization - [x] Upload these files to the superb-data repo - [x] Transcribe the corresponding s3prl processing of these files into our superb loading script - [ ] README: tags + description sections Related to #2619. cc: @lewtun
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2653/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2653/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5748
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5748/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5748/comments
https://api.github.com/repos/huggingface/datasets/issues/5748/events
https://github.com/huggingface/datasets/pull/5748
1,667,517,024
PR_kwDODunzps5OSgNH
5,748
[BUG FIX] Issue 5739
{ "avatar_url": "https://avatars.githubusercontent.com/u/1772912?v=4", "events_url": "https://api.github.com/users/ericxsun/events{/privacy}", "followers_url": "https://api.github.com/users/ericxsun/followers", "following_url": "https://api.github.com/users/ericxsun/following{/other_user}", "gists_url": "https://api.github.com/users/ericxsun/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ericxsun", "id": 1772912, "login": "ericxsun", "node_id": "MDQ6VXNlcjE3NzI5MTI=", "organizations_url": "https://api.github.com/users/ericxsun/orgs", "received_events_url": "https://api.github.com/users/ericxsun/received_events", "repos_url": "https://api.github.com/users/ericxsun/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ericxsun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ericxsun/subscriptions", "type": "User", "url": "https://api.github.com/users/ericxsun" }
[]
open
false
null
[]
null
[]
"2023-04-14T05:07:31Z"
"2023-04-14T05:07:31Z"
null
NONE
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5748.diff", "html_url": "https://github.com/huggingface/datasets/pull/5748", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/5748.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5748" }
A fix for https://github.com/huggingface/datasets/issues/5739
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5748/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5748/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/268
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/268/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/268/comments
https://api.github.com/repos/huggingface/datasets/issues/268/events
https://github.com/huggingface/datasets/pull/268
637,848,056
MDExOlB1bGxSZXF1ZXN0NDMzNzU5NzQ1
268
add Rotten Tomatoes Movie Review sentences sentiment dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/13238952?v=4", "events_url": "https://api.github.com/users/jxmorris12/events{/privacy}", "followers_url": "https://api.github.com/users/jxmorris12/followers", "following_url": "https://api.github.com/users/jxmorris12/following{/other_user}", "gists_url": "https://api.github.com/users/jxmorris12/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jxmorris12", "id": 13238952, "login": "jxmorris12", "node_id": "MDQ6VXNlcjEzMjM4OTUy", "organizations_url": "https://api.github.com/users/jxmorris12/orgs", "received_events_url": "https://api.github.com/users/jxmorris12/received_events", "repos_url": "https://api.github.com/users/jxmorris12/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jxmorris12/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jxmorris12/subscriptions", "type": "User", "url": "https://api.github.com/users/jxmorris12" }
[]
closed
false
null
[]
null
[ "@jplu @thomwolf @patrickvonplaten @lhoestq -- How do I request reviewers? Thanks." ]
"2020-06-12T15:53:59Z"
"2020-06-18T07:46:24Z"
"2020-06-18T07:46:23Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/268.diff", "html_url": "https://github.com/huggingface/datasets/pull/268", "merged_at": "2020-06-18T07:46:23Z", "patch_url": "https://github.com/huggingface/datasets/pull/268.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/268" }
Sentence-level movie reviews v1.0 from here: http://www.cs.cornell.edu/people/pabo/movie-review-data/
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/268/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/268/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5848
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5848/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5848/comments
https://api.github.com/repos/huggingface/datasets/issues/5848/events
https://github.com/huggingface/datasets/pull/5848
1,707,506,734
PR_kwDODunzps5QYa1B
5,848
Add `accelerate` as metric's test dependency to fix CI error
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007565 / 0.011353 (-0.003788) | 0.005361 / 0.011008 (-0.005647) | 0.098963 / 0.038508 (0.060455) | 0.034271 / 0.023109 (0.011162) | 0.323421 / 0.275898 (0.047523) | 0.348495 / 0.323480 (0.025015) | 0.006244 / 0.007986 (-0.001741) | 0.004215 / 0.004328 (-0.000113) | 0.073614 / 0.004250 (0.069364) | 0.049334 / 0.037052 (0.012282) | 0.315277 / 0.258489 (0.056788) | 0.354325 / 0.293841 (0.060484) | 0.035001 / 0.128546 (-0.093545) | 0.012149 / 0.075646 (-0.063497) | 0.335614 / 0.419271 (-0.083657) | 0.050532 / 0.043533 (0.006999) | 0.308500 / 0.255139 (0.053361) | 0.324620 / 0.283200 (0.041421) | 0.110241 / 0.141683 (-0.031442) | 1.443923 / 1.452155 (-0.008232) | 1.559289 / 1.492716 (0.066573) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.207629 / 0.018006 (0.189622) | 0.433251 / 0.000490 (0.432762) | 0.003021 / 0.000200 (0.002821) | 0.000074 / 0.000054 (0.000019) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028312 / 0.037411 (-0.009100) | 0.111829 / 0.014526 (0.097303) | 0.127099 / 0.176557 (-0.049458) | 0.184702 / 0.737135 (-0.552433) | 0.125062 / 0.296338 (-0.171277) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.399451 / 0.215209 (0.184242) | 3.966528 / 2.077655 (1.888874) | 
1.826004 / 1.504120 (0.321884) | 1.669547 / 1.541195 (0.128353) | 1.751584 / 1.468490 (0.283094) | 0.688308 / 4.584777 (-3.896469) | 3.813275 / 3.745712 (0.067562) | 3.181554 / 5.269862 (-2.088307) | 1.750566 / 4.565676 (-2.815111) | 0.085038 / 0.424275 (-0.339237) | 0.011992 / 0.007607 (0.004385) | 0.502374 / 0.226044 (0.276330) | 4.970614 / 2.268929 (2.701686) | 2.309617 / 55.444624 (-53.135007) | 2.012427 / 6.876477 (-4.864050) | 2.156348 / 2.142072 (0.014276) | 0.834415 / 4.805227 (-3.970812) | 0.167912 / 6.500664 (-6.332752) | 0.065711 / 0.075469 (-0.009758) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.223132 / 1.841788 (-0.618656) | 15.126753 / 8.074308 (7.052445) | 14.829184 / 10.191392 (4.637792) | 0.142582 / 0.680424 (-0.537842) | 0.017483 / 0.534201 (-0.516718) | 0.429768 / 0.579283 (-0.149516) | 0.422745 / 0.434364 (-0.011619) | 0.508813 / 0.540337 (-0.031525) | 0.618716 / 1.386936 (-0.768220) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007749 / 0.011353 (-0.003604) | 0.005433 / 0.011008 (-0.005576) | 0.076223 / 0.038508 (0.037715) | 0.036334 / 0.023109 (0.013225) | 0.375339 / 0.275898 (0.099441) | 0.413674 / 0.323480 (0.090194) | 0.006207 / 0.007986 (-0.001778) | 0.004085 / 0.004328 (-0.000244) | 0.076154 / 0.004250 (0.071904) | 0.050324 / 0.037052 (0.013271) | 0.382919 / 0.258489 (0.124429) | 0.442508 / 0.293841 (0.148667) | 0.035951 / 0.128546 (-0.092595) | 0.012067 / 0.075646 (-0.063580) | 0.087649 / 0.419271 (-0.331623) | 0.048786 / 0.043533 (0.005253) | 0.373541 / 0.255139 (0.118402) | 0.400437 / 0.283200 (0.117237) | 0.102622 / 0.141683 (-0.039061) | 1.472443 / 1.452155 (0.020288) | 1.580178 / 1.492716 (0.087462) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.222105 / 0.018006 (0.204098) | 0.445465 / 0.000490 (0.444975) | 0.003671 / 0.000200 (0.003471) | 0.000096 / 0.000054 (0.000041) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030808 / 0.037411 (-0.006603) | 0.116687 / 0.014526 (0.102161) | 0.124972 / 0.176557 (-0.051584) | 0.175621 / 0.737135 (-0.561514) | 0.129029 / 0.296338 (-0.167310) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.434627 / 0.215209 (0.219418) | 4.330268 / 2.077655 (2.252613) | 2.140266 / 1.504120 (0.636146) | 1.960705 / 1.541195 (0.419510) | 2.035949 / 1.468490 (0.567459) | 0.696830 / 4.584777 (-3.887947) | 3.790468 / 3.745712 (0.044756) | 3.194112 / 5.269862 (-2.075750) | 1.577728 / 4.565676 (-2.987948) | 0.085445 / 0.424275 (-0.338830) | 0.012207 / 0.007607 (0.004600) | 0.555199 / 0.226044 (0.329154) | 5.551539 / 2.268929 (3.282610) | 2.630917 / 55.444624 (-52.813707) | 2.383362 / 6.876477 (-4.493114) | 2.476301 / 2.142072 (0.334229) | 0.845773 / 4.805227 (-3.959455) | 0.169229 / 6.500664 (-6.331435) | 0.066064 / 0.075469 (-0.009405) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.277543 / 1.841788 (-0.564245) | 15.775637 / 8.074308 (7.701329) | 13.528588 / 10.191392 (3.337196) | 0.167428 / 0.680424 (-0.512996) | 0.017581 / 0.534201 (-0.516620) | 0.454472 / 0.579283 (-0.124811) | 0.427987 / 0.434364 (-0.006377) | 0.551512 / 0.540337 (0.011175) | 0.650811 / 1.386936 (-0.736125) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#96a6f5f526cc90330df597ae0097274742d5b84f \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009800 / 0.011353 (-0.001552) | 0.006443 / 0.011008 (-0.004565) | 0.144137 / 0.038508 (0.105629) | 0.037493 / 0.023109 (0.014383) | 0.482306 / 0.275898 (0.206408) | 0.467625 / 0.323480 (0.144145) | 0.006812 / 0.007986 (-0.001174) | 0.004810 / 0.004328 (0.000481) | 0.109047 / 0.004250 (0.104796) | 0.047169 / 0.037052 (0.010116) | 0.451253 / 0.258489 (0.192764) | 0.511339 / 0.293841 (0.217498) | 0.055583 / 0.128546 (-0.072963) | 0.021810 / 0.075646 (-0.053836) | 0.426522 / 0.419271 (0.007250) | 0.070282 / 0.043533 (0.026749) | 0.469631 / 0.255139 (0.214492) | 0.484951 / 0.283200 (0.201751) | 0.117370 / 0.141683 (-0.024313) | 1.809917 / 1.452155 (0.357763) | 1.882659 / 1.492716 (0.389943) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.223843 / 0.018006 (0.205837) | 0.549216 / 0.000490 (0.548726) | 0.007120 / 0.000200 (0.006920) | 0.000128 / 0.000054 (0.000074) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033057 / 0.037411 (-0.004354) | 0.128242 / 0.014526 (0.113716) | 0.140906 / 0.176557 (-0.035650) | 0.213122 / 0.737135 (-0.524013) | 0.148115 / 0.296338 (-0.148224) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.638712 / 0.215209 (0.423503) | 6.383684 / 2.077655 (4.306029) | 2.477020 / 1.504120 (0.972900) | 2.129190 / 1.541195 (0.587996) | 2.230503 / 1.468490 (0.762013) | 1.367167 / 4.584777 (-3.217610) | 5.570586 / 3.745712 (1.824873) | 5.462857 / 5.269862 (0.192996) | 2.990604 / 4.565676 (-1.575073) | 0.146543 / 0.424275 (-0.277732) | 0.016060 / 0.007607 (0.008453) | 0.812691 / 0.226044 (0.586646) | 7.928041 / 2.268929 (5.659112) | 3.329494 / 55.444624 (-52.115130) | 2.523452 / 6.876477 (-4.353025) | 2.672374 / 2.142072 (0.530302) | 1.598554 / 4.805227 (-3.206673) | 0.284727 / 6.500664 (-6.215937) | 0.080359 / 0.075469 (0.004889) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.501112 / 1.841788 (-0.340675) | 17.553644 / 8.074308 (9.479335) | 22.704062 / 10.191392 (12.512670) | 0.225575 / 0.680424 (-0.454849) | 0.026531 / 0.534201 (-0.507670) | 0.520129 / 0.579283 (-0.059154) | 0.626220 / 0.434364 (0.191856) | 0.631740 / 0.540337 (0.091403) 
| 0.750611 / 1.386936 (-0.636325) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009866 / 0.011353 (-0.001487) | 0.005733 / 0.011008 (-0.005275) | 0.111529 / 0.038508 (0.073021) | 0.042001 / 0.023109 (0.018891) | 0.458578 / 0.275898 (0.182680) | 0.507796 / 0.323480 (0.184316) | 0.006547 / 0.007986 (-0.001438) | 0.005611 / 0.004328 (0.001282) | 0.115321 / 0.004250 (0.111070) | 0.048741 / 0.037052 (0.011689) | 0.447611 / 0.258489 (0.189122) | 0.531830 / 0.293841 (0.237989) | 0.052176 / 0.128546 (-0.076370) | 0.022431 / 0.075646 (-0.053216) | 0.120709 / 0.419271 (-0.298562) | 0.067301 / 0.043533 (0.023769) | 0.460577 / 0.255139 (0.205438) | 0.497805 / 0.283200 (0.214605) | 0.121830 / 0.141683 (-0.019853) | 1.876436 / 1.452155 (0.424281) | 1.983491 / 1.492716 (0.490775) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.230982 / 0.018006 (0.212976) | 0.540643 / 0.000490 (0.540153) | 0.004646 / 0.000200 (0.004446) | 0.000131 / 0.000054 (0.000077) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034230 / 0.037411 (-0.003181) | 0.136454 / 0.014526 (0.121928) | 0.143370 / 0.176557 (-0.033187) | 0.206752 / 0.737135 (-0.530384) | 0.148722 / 0.296338 (-0.147617) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.704667 / 0.215209 (0.489458) | 7.112079 / 2.077655 (5.034424) | 3.083916 / 1.504120 (1.579797) | 2.606388 / 1.541195 (1.065193) | 2.738505 / 1.468490 
(1.270015) | 1.314897 / 4.584777 (-3.269880) | 5.764442 / 3.745712 (2.018729) | 3.491890 / 5.269862 (-1.777972) | 2.299983 / 4.565676 (-2.265693) | 0.169655 / 0.424275 (-0.254620) | 0.015251 / 0.007607 (0.007643) | 0.977230 / 0.226044 (0.751186) | 9.697773 / 2.268929 (7.428844) | 3.826928 / 55.444624 (-51.617697) | 3.108238 / 6.876477 (-3.768239) | 3.103242 / 2.142072 (0.961169) | 1.586645 / 4.805227 (-3.218582) | 0.287181 / 6.500664 (-6.213483) | 0.107332 / 0.075469 (0.031863) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.712710 / 1.841788 (-0.129077) | 19.169403 / 8.074308 (11.095095) | 21.777301 / 10.191392 (11.585909) | 0.216918 / 0.680424 (-0.463506) | 0.026551 / 0.534201 (-0.507650) | 0.570383 / 0.579283 (-0.008900) | 0.643885 / 0.434364 (0.209521) | 0.673906 / 0.540337 (0.133568) | 0.824573 / 1.386936 (-0.562363) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#4ead18b6921c9576a3078d2fb685c38f1e1a4b8a \"CML watermark\")\n" ]
"2023-05-12T12:01:01Z"
"2023-05-12T13:48:47Z"
"2023-05-12T13:39:06Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5848.diff", "html_url": "https://github.com/huggingface/datasets/pull/5848", "merged_at": "2023-05-12T13:39:06Z", "patch_url": "https://github.com/huggingface/datasets/pull/5848.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5848" }
The `frugalscore` metric uses Transformers' Trainer, which now requires `accelerate`. Fixes the following [CI error](https://github.com/huggingface/datasets/actions/runs/4950900048/jobs/8855148703?pr=5845).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5848/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5848/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/265
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/265/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/265/comments
https://api.github.com/repos/huggingface/datasets/issues/265/events
https://github.com/huggingface/datasets/pull/265
637,139,220
MDExOlB1bGxSZXF1ZXN0NDMzMTgxNDMz
265
Add pyarrow warning colab
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
"2020-06-11T15:57:51Z"
"2020-08-02T18:14:36Z"
"2020-06-12T08:14:16Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/265.diff", "html_url": "https://github.com/huggingface/datasets/pull/265", "merged_at": "2020-06-12T08:14:16Z", "patch_url": "https://github.com/huggingface/datasets/pull/265.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/265" }
When a user installs `nlp` on google colab, then google colab doesn't update pyarrow, and the runtime needs to be restarted to use the updated version of pyarrow. This is an issue because `nlp` requires the updated version to work correctly. In this PR I added an error that is shown to the user in google colab if the user tries to `import nlp` without having restarted the runtime. The error tells the user to restart the runtime.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/265/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/265/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1585
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1585/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1585/comments
https://api.github.com/repos/huggingface/datasets/issues/1585/events
https://github.com/huggingface/datasets/issues/1585
768,831,171
MDU6SXNzdWU3Njg4MzExNzE=
1,585
FileNotFoundError for `amazon_polarity`
{ "avatar_url": "https://avatars.githubusercontent.com/u/24647404?v=4", "events_url": "https://api.github.com/users/phtephanx/events{/privacy}", "followers_url": "https://api.github.com/users/phtephanx/followers", "following_url": "https://api.github.com/users/phtephanx/following{/other_user}", "gists_url": "https://api.github.com/users/phtephanx/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/phtephanx", "id": 24647404, "login": "phtephanx", "node_id": "MDQ6VXNlcjI0NjQ3NDA0", "organizations_url": "https://api.github.com/users/phtephanx/orgs", "received_events_url": "https://api.github.com/users/phtephanx/received_events", "repos_url": "https://api.github.com/users/phtephanx/repos", "site_admin": false, "starred_url": "https://api.github.com/users/phtephanx/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/phtephanx/subscriptions", "type": "User", "url": "https://api.github.com/users/phtephanx" }
[]
closed
false
null
[]
null
[ "Hi @phtephanx , the `amazon_polarity` dataset has not been released yet. It will be available in the coming soon v2of `datasets` :) \r\n\r\nYou can still access it now if you want, but you will need to install datasets via the master branch:\r\n`pip install git+https://github.com/huggingface/datasets.git@master`" ]
"2020-12-16T12:51:05Z"
"2020-12-16T16:02:56Z"
"2020-12-16T16:02:56Z"
NONE
null
null
null
Version: `datasets==v1.1.3` ### Reproduction ```python from datasets import load_dataset data = load_dataset("amazon_polarity") ``` crashes with ```bash FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/amazon_polarity/amazon_polarity.py ``` and ```bash FileNotFoundError: Couldn't find file at https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/amazon_polarity/amazon_polarity.py ``` and ```bash FileNotFoundError: Couldn't find file locally at amazon_polarity/amazon_polarity.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/amazon_polarity/amazon_polarity.py or https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/amazon_polarity/amazon_polarity.py ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1585/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1585/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/3931
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3931/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3931/comments
https://api.github.com/repos/huggingface/datasets/issues/3931/events
https://github.com/huggingface/datasets/pull/3931
1,170,097,208
PR_kwDODunzps40fBjx
3,931
Add align_labels_with_mapping docs
{ "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/stevhliu", "id": 59462357, "login": "stevhliu", "node_id": "MDQ6VXNlcjU5NDYyMzU3", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "repos_url": "https://api.github.com/users/stevhliu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "type": "User", "url": "https://api.github.com/users/stevhliu" }
[ { "color": "0075ca", "default": true, "description": "Improvements or additions to documentation", "id": 1935892861, "name": "documentation", "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation" } ]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
"2022-03-15T19:24:57Z"
"2022-03-18T16:28:31Z"
"2022-03-18T16:24:33Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3931.diff", "html_url": "https://github.com/huggingface/datasets/pull/3931", "merged_at": "2022-03-18T16:24:33Z", "patch_url": "https://github.com/huggingface/datasets/pull/3931.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3931" }
This PR documents the `align_labels_with_mapping` function to ensure predicted labels are aligned with the dataset, or to assign a different mapping of labels to ids (requested by @mariosasko 🎉 ). For this specific code sample, the current dataset has a `mixed` label that the original [dataset](https://huggingface.co/datasets/poem_sentiment#data-fields) didn't. Is there a way to remove this label so it is completely aligned with the original dataset mappings? Otherwise, I'll just leave it as it is.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3931/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3931/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5154
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5154/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5154/comments
https://api.github.com/repos/huggingface/datasets/issues/5154/events
https://github.com/huggingface/datasets/pull/5154
1,421,161,992
PR_kwDODunzps5BbpQZ
5,154
Test latest fsspec in CI
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "actually the latest fsspec is already installed " ]
"2022-10-24T17:18:13Z"
"2023-09-24T10:06:06Z"
"2022-10-25T09:30:45Z"
MEMBER
null
1
{ "diff_url": "https://github.com/huggingface/datasets/pull/5154.diff", "html_url": "https://github.com/huggingface/datasets/pull/5154", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/5154.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5154" }
Following the discussion in https://discuss.huggingface.co/t/attributeerror-module-fsspec-has-no-attribute-asyn/19255 I think we need to test the latest fsspec in the CI
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5154/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5154/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4977
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4977/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4977/comments
https://api.github.com/repos/huggingface/datasets/issues/4977/events
https://github.com/huggingface/datasets/issues/4977
1,372,962,157
I_kwDODunzps5R1b1t
4,977
Providing dataset size
{ "avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4", "events_url": "https://api.github.com/users/sashavor/events{/privacy}", "followers_url": "https://api.github.com/users/sashavor/followers", "following_url": "https://api.github.com/users/sashavor/following{/other_user}", "gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sashavor", "id": 14205986, "login": "sashavor", "node_id": "MDQ6VXNlcjE0MjA1OTg2", "organizations_url": "https://api.github.com/users/sashavor/orgs", "received_events_url": "https://api.github.com/users/sashavor/received_events", "repos_url": "https://api.github.com/users/sashavor/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sashavor/subscriptions", "type": "User", "url": "https://api.github.com/users/sashavor" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
[ "Hi @sashavor, thanks for your suggestion.\r\n\r\nUntil now we have the CLI command \r\n```\r\ndatasets-cli test datasets/<your-dataset-folder> --save_infos --all_configs\r\n```\r\nthat generates the `dataset_infos.json` with the size of the downloaded dataset, among other information.\r\n\r\nWe are currently in the middle of removing those JSON files and putting their information directly in the header of the `README.md` (as YAML tags). Normally, the CLI command should continue working but saving its output to the dataset card instead. See:\r\n- #4926", "Additionally, the download size can be inferred by doing HEAD requests to the files to be downloaded. And for files hosted on the hub you can even get the file sizes using the Hub API", "Amazing @albertvillanova ! I think just having that information visible in the dataset info (without having to do any requests/additional coding) would be really useful :hugs: " ]
"2022-09-14T13:09:27Z"
"2022-09-15T16:03:58Z"
null
NONE
null
null
null
**Is your feature request related to a problem? Please describe.** Especially for big datasets like [LAION](https://huggingface.co/datasets/laion/laion2B-en/), it's hard to know exactly the downloaded size (because there are many files and you don't have their exact size when downloaded). **Describe the solution you'd like** Auto-populating the downloaded dataset size on the dataset page would be really useful, including that of each split (when there are some). **Describe alternatives you've considered** People should be adding this to dataset cards, but I don't think that is systematically the case :slightly_smiling_face: **Additional context** Mentioned to @lhoestq
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/4977/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4977/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6222
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6222/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6222/comments
https://api.github.com/repos/huggingface/datasets/issues/6222/events
https://github.com/huggingface/datasets/pull/6222
1,884,875,510
PR_kwDODunzps5Zup2f
6,222
fix typo in Audio dataset documentation
{ "avatar_url": "https://avatars.githubusercontent.com/u/3224332?v=4", "events_url": "https://api.github.com/users/prassanna-ravishankar/events{/privacy}", "followers_url": "https://api.github.com/users/prassanna-ravishankar/followers", "following_url": "https://api.github.com/users/prassanna-ravishankar/following{/other_user}", "gists_url": "https://api.github.com/users/prassanna-ravishankar/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/prassanna-ravishankar", "id": 3224332, "login": "prassanna-ravishankar", "node_id": "MDQ6VXNlcjMyMjQzMzI=", "organizations_url": "https://api.github.com/users/prassanna-ravishankar/orgs", "received_events_url": "https://api.github.com/users/prassanna-ravishankar/received_events", "repos_url": "https://api.github.com/users/prassanna-ravishankar/repos", "site_admin": false, "starred_url": "https://api.github.com/users/prassanna-ravishankar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/prassanna-ravishankar/subscriptions", "type": "User", "url": "https://api.github.com/users/prassanna-ravishankar" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006655 / 0.011353 (-0.004698) | 0.004115 / 0.011008 (-0.006893) | 0.083895 / 0.038508 (0.045387) | 0.072770 / 0.023109 (0.049661) | 0.311401 / 0.275898 (0.035503) | 0.341079 / 0.323480 (0.017599) | 0.005488 / 0.007986 (-0.002497) | 0.003530 / 0.004328 (-0.000799) | 0.064691 / 0.004250 (0.060441) | 0.053096 / 0.037052 (0.016044) | 0.314969 / 0.258489 (0.056480) | 0.358245 / 0.293841 (0.064404) | 0.030789 / 0.128546 (-0.097757) | 0.008868 / 0.075646 (-0.066779) | 0.288022 / 0.419271 (-0.131249) | 0.052092 / 0.043533 (0.008559) | 0.310061 / 0.255139 (0.054922) | 0.345369 / 0.283200 (0.062170) | 0.024100 / 0.141683 (-0.117582) | 1.520573 / 1.452155 (0.068418) | 1.593750 / 1.492716 (0.101033) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.242520 / 0.018006 (0.224514) | 0.567963 / 0.000490 (0.567473) | 0.003183 / 0.000200 (0.002983) | 0.000074 / 0.000054 (0.000020) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029473 / 0.037411 (-0.007939) | 0.083012 / 0.014526 (0.068486) | 0.262386 / 0.176557 (0.085830) | 0.155131 / 0.737135 (-0.582004) | 0.099880 / 0.296338 (-0.196458) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.382388 / 0.215209 (0.167179) | 3.816538 / 2.077655 (1.738884) | 1.863422 
/ 1.504120 (0.359302) | 1.694652 / 1.541195 (0.153457) | 1.738738 / 1.468490 (0.270248) | 0.477073 / 4.584777 (-4.107704) | 3.539244 / 3.745712 (-0.206468) | 3.238469 / 5.269862 (-2.031392) | 2.026154 / 4.565676 (-2.539523) | 0.056111 / 0.424275 (-0.368164) | 0.007615 / 0.007607 (0.000008) | 0.460620 / 0.226044 (0.234576) | 4.596383 / 2.268929 (2.327455) | 2.348645 / 55.444624 (-53.095979) | 1.977465 / 6.876477 (-4.899011) | 2.222828 / 2.142072 (0.080755) | 0.588065 / 4.805227 (-4.217162) | 0.132175 / 6.500664 (-6.368489) | 0.061322 / 0.075469 (-0.014147) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.260623 / 1.841788 (-0.581164) | 19.976475 / 8.074308 (11.902167) | 14.346488 / 10.191392 (4.155096) | 0.145614 / 0.680424 (-0.534810) | 0.018309 / 0.534201 (-0.515892) | 0.393644 / 0.579283 (-0.185639) | 0.405355 / 0.434364 (-0.029009) | 0.458355 / 0.540337 (-0.081982) | 0.630147 / 1.386936 (-0.756789) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006769 / 0.011353 (-0.004584) | 0.004172 / 0.011008 (-0.006836) | 0.064863 / 0.038508 (0.026355) | 0.076831 / 0.023109 (0.053722) | 0.419391 / 0.275898 (0.143493) | 0.439912 / 0.323480 (0.116432) | 0.006249 / 0.007986 (-0.001737) | 0.003571 / 0.004328 (-0.000757) | 0.064877 / 0.004250 (0.060626) | 0.056023 / 0.037052 (0.018971) | 0.419899 / 0.258489 (0.161410) | 0.459334 / 0.293841 (0.165493) | 0.032217 / 0.128546 (-0.096329) | 0.008628 / 0.075646 (-0.067019) | 0.071089 / 0.419271 (-0.348183) | 0.047463 / 0.043533 (0.003930) | 0.414961 / 0.255139 (0.159822) | 0.431408 / 0.283200 (0.148209) | 0.022406 / 0.141683 (-0.119277) | 1.511890 / 1.452155 (0.059735) | 1.580268 / 1.492716 (0.087551) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.280805 / 0.018006 (0.262799) | 0.553766 / 0.000490 (0.553276) | 0.006155 / 0.000200 (0.005955) | 0.000102 / 0.000054 (0.000047) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032980 / 0.037411 (-0.004431) | 0.092981 / 0.014526 (0.078456) | 0.108820 / 0.176557 (-0.067737) | 0.161709 / 0.737135 (-0.575426) | 0.109772 / 0.296338 (-0.186566) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.433659 / 0.215209 (0.218450) | 4.328577 / 2.077655 (2.250923) | 2.316899 / 1.504120 (0.812779) | 2.142645 / 1.541195 (0.601451) | 2.245518 / 1.468490 (0.777028) | 0.489448 / 4.584777 (-4.095329) | 3.630074 / 3.745712 (-0.115638) | 3.322749 / 5.269862 (-1.947112) | 2.062307 / 4.565676 (-2.503370) | 0.058153 / 0.424275 (-0.366122) | 0.007453 / 0.007607 (-0.000154) | 0.507234 / 0.226044 (0.281190) | 5.071830 / 2.268929 (2.802902) | 2.839374 / 55.444624 (-52.605250) | 2.429583 / 6.876477 (-4.446893) | 2.671940 / 2.142072 (0.529868) | 0.588256 / 4.805227 (-4.216972) | 0.135135 / 6.500664 (-6.365530) | 0.060963 / 0.075469 (-0.014506) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.337462 / 1.841788 (-0.504326) | 20.292912 / 8.074308 (12.218604) | 14.871809 / 10.191392 (4.680417) | 0.169214 / 0.680424 (-0.511209) | 0.020450 / 0.534201 (-0.513751) | 0.397094 / 0.579283 (-0.182189) | 0.411623 / 0.434364 (-0.022741) | 0.471560 / 0.540337 (-0.068777) | 0.647293 / 1.386936 (-0.739643) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#0a068dbf3b446417ffd89d32857608394ec699e6 \"CML watermark\")\n" ]
"2023-09-06T23:17:24Z"
"2023-10-03T14:18:41Z"
"2023-09-07T15:39:09Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6222.diff", "html_url": "https://github.com/huggingface/datasets/pull/6222", "merged_at": "2023-09-07T15:39:09Z", "patch_url": "https://github.com/huggingface/datasets/pull/6222.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6222" }
There is a typo in the section of the documentation dedicated to creating an audio dataset. The Dataset is incorrectly suffixed with a `Config` https://huggingface.co/datasets/indonesian-nlp/librivox-indonesia/blob/main/librivox-indonesia.py#L59
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6222/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6222/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/135
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/135/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/135/comments
https://api.github.com/repos/huggingface/datasets/issues/135/events
https://github.com/huggingface/datasets/pull/135
619,206,708
MDExOlB1bGxSZXF1ZXN0NDE4Nzc4MTMw
135
Fix print statement in READ.md
{ "avatar_url": "https://avatars.githubusercontent.com/u/51091425?v=4", "events_url": "https://api.github.com/users/codehunk628/events{/privacy}", "followers_url": "https://api.github.com/users/codehunk628/followers", "following_url": "https://api.github.com/users/codehunk628/following{/other_user}", "gists_url": "https://api.github.com/users/codehunk628/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/codehunk628", "id": 51091425, "login": "codehunk628", "node_id": "MDQ6VXNlcjUxMDkxNDI1", "organizations_url": "https://api.github.com/users/codehunk628/orgs", "received_events_url": "https://api.github.com/users/codehunk628/received_events", "repos_url": "https://api.github.com/users/codehunk628/repos", "site_admin": false, "starred_url": "https://api.github.com/users/codehunk628/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/codehunk628/subscriptions", "type": "User", "url": "https://api.github.com/users/codehunk628" }
[]
closed
false
null
[]
null
[ "Indeed, thanks!" ]
"2020-05-15T19:52:23Z"
"2020-05-17T12:14:06Z"
"2020-05-17T12:14:05Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/135.diff", "html_url": "https://github.com/huggingface/datasets/pull/135", "merged_at": "2020-05-17T12:14:05Z", "patch_url": "https://github.com/huggingface/datasets/pull/135.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/135" }
The print statement was printing a generator object instead of the names of available datasets/metrics.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/135/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/135/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4475
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4475/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4475/comments
https://api.github.com/repos/huggingface/datasets/issues/4475/events
https://github.com/huggingface/datasets/pull/4475
1,267,798,451
PR_kwDODunzps45eufw
4,475
Improve error message for missing packages from inside dataset script
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "I opened a PR before I noticed yours ^^' You can find it here: https://github.com/huggingface/datasets/pull/4484\r\n\r\nThe only comment I have regarding your message is that it possibly shows several `pip install` commands, whereas one can run one single `pip install` command with the list of missing dependencies, which is maybe simpler.\r\n\r\nLet me know which one your prefer", "Closing in favor of #4484. " ]
"2022-06-10T16:59:36Z"
"2022-10-06T13:46:26Z"
"2022-06-13T13:16:43Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4475.diff", "html_url": "https://github.com/huggingface/datasets/pull/4475", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/4475.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4475" }
Improve the error message for missing packages from inside a dataset script: With this change, the error message for missing packages for `bigbench` looks as follows: ``` ImportError: To be able to use bigbench, you need to install the following dependencies: - 'bigbench' using 'pip install "bigbench @ https://storage.googleapis.com/public_research_data/bigbench/bigbench-0.0.1.tar.gz"' ``` And this is how it looked before: ``` ImportError: To be able to use bigbench, you need to install the following dependencies['bigbench', 'bigbench', 'bigbench', 'bigbench'] using 'pip install "bigbench @ https://storage.googleapis.com/public_research_data/bigbench/bigbench-0.0.1.tar.gz" bigbench bigbench bigbench' for instance' ```
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/4475/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4475/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5272
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5272/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5272/comments
https://api.github.com/repos/huggingface/datasets/issues/5272/events
https://github.com/huggingface/datasets/issues/5272
1,456,940,021
I_kwDODunzps5W1yP1
5,272
Use pyarrow Tensor dtype
{ "avatar_url": "https://avatars.githubusercontent.com/u/18228395?v=4", "events_url": "https://api.github.com/users/franz101/events{/privacy}", "followers_url": "https://api.github.com/users/franz101/followers", "following_url": "https://api.github.com/users/franz101/following{/other_user}", "gists_url": "https://api.github.com/users/franz101/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/franz101", "id": 18228395, "login": "franz101", "node_id": "MDQ6VXNlcjE4MjI4Mzk1", "organizations_url": "https://api.github.com/users/franz101/orgs", "received_events_url": "https://api.github.com/users/franz101/received_events", "repos_url": "https://api.github.com/users/franz101/repos", "site_admin": false, "starred_url": "https://api.github.com/users/franz101/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/franz101/subscriptions", "type": "User", "url": "https://api.github.com/users/franz101" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
[ "Hi ! We're using the Arrow format for the datasets, and PyArrow tensors are not part of the Arrow format AFAIK:\r\n\r\n> There is no direct support in the arrow columnar format to store Tensors as column values.\r\n\r\nsource: https://github.com/apache/arrow/issues/4802#issuecomment-508494694", "@wesm @rok its been around three years. any updates, regarding dataset arrow tensor support? 🙏 I know you must be very busy, would appreciate to learn what is the state of art. I saw the PR is still open [#8510](https://github.com/apache/arrow/pull/8510)", "Hey @franz101 & @lhoestq!\r\nThere is a plan and a PR to create an [ExtensionArray of Tensors](https://github.com/apache/arrow/pull/8510) of equal sizes as well as a plan to do the same for Tensors of different sizes [ARROW-8714](https://issues.apache.org/jira/browse/ARROW-8714).", "The work stalled a little because it was not clear where TensorArray would live. However Arrow community recently agreed to make a [well-known-extension-type document](https://lists.apache.org/thread/sxd5fhc42hb6svs79t3fd79gkqj83pfh) and I would like https://github.com/apache/arrow/pull/8510 to land there and add an implementation to C++/Python + another language. Is that something you would find beneficial to you?", "that is a great update, thank you.\r\nit looks like this feature would benefit datasets implementation of [ArrayExtensionArray](https://github.com/huggingface/datasets/blob/9f2ff14673cac1f1ad56d80221a793f5938b68c7/src/datasets/features/features.py#L585-L641). Is that correct @eladsegal @lhoestq?\r\n\r\n", "TensorArray sounds great ! Looking forward to it :)\r\n\r\nWe've had our own ExtensionArray for fixed shape tensors for a while now, hoping to see something more standardized by the arrow community.\r\n\r\nAlso super interested in the extension array for tensors of different sizes cc @mariosasko ", "[FixedShapeTensor ExtensionType](https://github.com/apache/arrow/pull/8510) was merged and will be in Arrow 12.0.0 (release is planned mid April).\r\n", "@rok Thanks for keeping us updated! I think it's best to introduce a new feature type that would use this extension type under the hood. I'll create an issue to discuss the design with the community in the coming days.\r\n\r\nAlso, is there a tentative time frame for the variable-shape Tensor extension type?", "@mariosasko please tag me in the discussion, perhaps I can contribute.\r\n\r\nAs for the [variable shape tensor array](https://github.com/apache/arrow/issues/24868) - I'd be interested in working on it but didn't see much interest in community yet. Are you saying `huggingface/datasets` could use it?", "pyarrow 12 is out 🎉, will have a look if I can work on it for the ExtensionArray", "I think these two issues need to be fixed first on the Arrow side before adding the tensor feature type here: https://github.com/apache/arrow/issues/35573 and https://github.com/apache/arrow/issues/35599.\r\n\r\n@rok We've had a couple of requests for supporting variable-shape tensors on the forum/GH, but I did not manage to find the concrete issues using the search. 
TF/TFDS (and PyTorch with the `nested_tensor` API) support them, so it makes sense for us to do the same eventually (the Ray project has an [extension](https://github.com/ray-project/ray/blob/42a8d1489b37243f203120899a23d919dc85bf2a/python/ray/air/util/tensor_extensions/arrow.py#L634) type to support this case)", "> @rok We've had a couple of requests for supporting variable-shape tensors on the forum/GH, but I did not manage to find the concrete issues using the search. TF/TFDS (and PyTorch with the `nested_tensor` API) support them, so it makes sense for us to do the same eventually (the Ray project has an [extension](https://github.com/ray-project/ray/blob/42a8d1489b37243f203120899a23d919dc85bf2a/python/ray/air/util/tensor_extensions/arrow.py#L634) type to support this case)\r\n\r\nThat does make sense indeed. We should probably also be careful about memory layout to enable zero-copy interface to TF/PyTorch.", "So there is no way we can use [pyarrow.Tensor](https://arrow.apache.org/docs/python/generated/pyarrow.Tensor.html#pyarrow.Tensor) ?", "Not with with the Arrow format, and therefore not in `datasets`. But they released a new [FixedShapeTensorArray](https://arrow.apache.org/docs/python/extending_types.html#fixed-size-tensor) to store tensors in Arrow format. We plan to support this in `datasets` at one point !", "There is also an open issue to enable the conversion of `pyarrow.Tensor` to `pyarrow.FixedShapeTensorType`: https://github.com/apache/arrow/issues/35068. This way one could indirectly use `pyarrow.Tensor` in Arrow format.", "We started a [mailing list discussion](https://lists.apache.org/thread/qc9qho0fg5ph1dns4hjq56hp4tj7rk1k) about potential `VariableShapeTensor` extension array, please check it out and give feedback. For more details here's also a PR https://github.com/apache/arrow/pull/37166." ]
"2022-11-20T15:18:41Z"
"2023-08-17T21:09:11Z"
null
NONE
null
null
null
### Feature request I was going through the discussion of converting tensors to lists. Is there a way to leverage pyarrow's Tensors for nested arrays / embeddings? For example: ```python import pyarrow as pa import numpy as np x = np.array([[2, 2, 4], [4, 5, 100]], np.int32) pa.Tensor.from_numpy(x, dim_names=["dim1","dim2"]) ``` [Apache docs](https://arrow.apache.org/docs/python/generated/pyarrow.Tensor.html) Maybe this belongs in the pyarrow features / repo. ### Motivation Working with big data, we need to make sure to use the best data structures and IO out there ### Your contribution Can try a PR if code changes are necessary
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/5272/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5272/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1591
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1591/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1591/comments
https://api.github.com/repos/huggingface/datasets/issues/1591/events
https://github.com/huggingface/datasets/issues/1591
769,383,714
MDU6SXNzdWU3NjkzODM3MTQ=
1,591
IWSLT-17 Link Broken
{ "avatar_url": "https://avatars.githubusercontent.com/u/11954789?v=4", "events_url": "https://api.github.com/users/ZhaofengWu/events{/privacy}", "followers_url": "https://api.github.com/users/ZhaofengWu/followers", "following_url": "https://api.github.com/users/ZhaofengWu/following{/other_user}", "gists_url": "https://api.github.com/users/ZhaofengWu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ZhaofengWu", "id": 11954789, "login": "ZhaofengWu", "node_id": "MDQ6VXNlcjExOTU0Nzg5", "organizations_url": "https://api.github.com/users/ZhaofengWu/orgs", "received_events_url": "https://api.github.com/users/ZhaofengWu/received_events", "repos_url": "https://api.github.com/users/ZhaofengWu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ZhaofengWu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ZhaofengWu/subscriptions", "type": "User", "url": "https://api.github.com/users/ZhaofengWu" }
[ { "color": "cfd3d7", "default": true, "description": "This issue or pull request already exists", "id": 1935892865, "name": "duplicate", "node_id": "MDU6TGFiZWwxOTM1ODkyODY1", "url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate" }, { "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library", "id": 2067388877, "name": "dataset bug", "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug" } ]
closed
false
null
[]
null
[ "Sorry, this is a duplicate of #1287. Not sure why it didn't come up when I searched `iwslt` in the issues list.", "Closing this since its a duplicate" ]
"2020-12-17T00:46:42Z"
"2020-12-18T08:06:36Z"
"2020-12-18T08:05:28Z"
NONE
null
null
null
``` FileNotFoundError: Couldn't find file at https://wit3.fbk.eu/archive/2017-01-trnmted//texts/DeEnItNlRo/DeEnItNlRo/DeEnItNlRo-DeEnItNlRo.tgz ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1591/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1591/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/750
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/750/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/750/comments
https://api.github.com/repos/huggingface/datasets/issues/750/events
https://github.com/huggingface/datasets/issues/750
726,589,446
MDU6SXNzdWU3MjY1ODk0NDY=
750
load_dataset doesn't include `features` in its hash
{ "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sgugger", "id": 35901082, "login": "sgugger", "node_id": "MDQ6VXNlcjM1OTAxMDgy", "organizations_url": "https://api.github.com/users/sgugger/orgs", "received_events_url": "https://api.github.com/users/sgugger/received_events", "repos_url": "https://api.github.com/users/sgugger/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "type": "User", "url": "https://api.github.com/users/sgugger" }
[]
closed
false
null
[]
null
[]
"2020-10-21T15:16:41Z"
"2020-10-29T09:36:01Z"
"2020-10-29T09:36:01Z"
CONTRIBUTOR
null
null
null
It looks like the function `load_dataset` does not include what's passed in the `features` argument when creating a hash for a given dataset. As a result, if a user includes new features from an already downloaded dataset, those are ignored. Example: some models on the hub have a different ordering for the labels than what `datasets` uses for MNLI so I'd like to do something along the lines of: ``` dataset = load_dataset("glue", "mnli") features = dataset["train"].features features["label"] = ClassLabel(names = ['entailment', 'contradiction', 'neutral']) # new label order dataset = load_dataset("glue", "mnli", features=features) ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/750/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/750/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2484
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2484/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2484/comments
https://api.github.com/repos/huggingface/datasets/issues/2484/events
https://github.com/huggingface/datasets/issues/2484
919,092,635
MDU6SXNzdWU5MTkwOTI2MzU=
2,484
Implement loading a dataset builder
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" } ]
null
[ "#self-assign" ]
"2021-06-11T18:47:22Z"
"2021-07-05T10:45:57Z"
"2021-07-05T10:45:57Z"
MEMBER
null
null
null
As discussed with @stas00 and @lhoestq, this would allow things like: ```python from datasets import load_dataset_builder dataset_name = "openwebtext" builder = load_dataset_builder(dataset_name) print(builder.cache_dir) ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2484/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2484/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5321
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5321/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5321/comments
https://api.github.com/repos/huggingface/datasets/issues/5321/events
https://github.com/huggingface/datasets/pull/5321
1,471,430,667
PR_kwDODunzps5EEOhE
5,321
Fix loading from HF GCP cache
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "> Do you know why this stopped working?\r\n\r\nIt comes from the changes in https://github.com/huggingface/datasets/pull/5107/files#diff-355ae5c229f95f86895404b72378ecd6e966c41cbeebb674af6fe6e9611bc126" ]
"2022-12-01T14:39:06Z"
"2022-12-01T16:10:09Z"
"2022-12-01T16:07:02Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5321.diff", "html_url": "https://github.com/huggingface/datasets/pull/5321", "merged_at": "2022-12-01T16:07:02Z", "patch_url": "https://github.com/huggingface/datasets/pull/5321.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5321" }
As reported in https://discuss.huggingface.co/t/error-loading-wikipedia-dataset/26599/4 it's not possible to download a cached version of Wikipedia from the HF GCP cache. I fixed it and added an integration test (runs in 10 sec).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5321/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5321/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5701
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5701/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5701/comments
https://api.github.com/repos/huggingface/datasets/issues/5701/events
https://github.com/huggingface/datasets/pull/5701
1,652,931,399
PR_kwDODunzps5NiSCy
5,701
Add Dataset.from_spark
{ "avatar_url": "https://avatars.githubusercontent.com/u/106995444?v=4", "events_url": "https://api.github.com/users/maddiedawson/events{/privacy}", "followers_url": "https://api.github.com/users/maddiedawson/followers", "following_url": "https://api.github.com/users/maddiedawson/following{/other_user}", "gists_url": "https://api.github.com/users/maddiedawson/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/maddiedawson", "id": 106995444, "login": "maddiedawson", "node_id": "U_kgDOBmCe9A", "organizations_url": "https://api.github.com/users/maddiedawson/orgs", "received_events_url": "https://api.github.com/users/maddiedawson/received_events", "repos_url": "https://api.github.com/users/maddiedawson/repos", "site_admin": false, "starred_url": "https://api.github.com/users/maddiedawson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/maddiedawson/subscriptions", "type": "User", "url": "https://api.github.com/users/maddiedawson" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "@mariosasko Would you or another HF datasets maintainer be able to review this, please?", "Amazing ! Great job @maddiedawson \r\n\r\nDo you know if it's possible to also support writing to Parquet using the HF ParquetWriter if `file_format=\"parquet\"` ?\r\n\r\nParquet is often used when people want to stream the data to train models - which is suitable for big datasets. On the other hand Arrow is generally used for local memory mapping with random access.\r\n\r\n> Please note there was a previous PR adding this functionality\r\n\r\nAm I right to say that it uses the spark workers to prepare the Arrow files ? If so this should make the data preparation fast and won't fill up the executor's memory as in the previously proposed PR", "Thanks for taking a look! Unlike the previous PR's approach, this implementation takes advantage of Spark mapping to distribute file writing over multiple tasks. (Also it doesn't load the entire dataset into memory :) )\r\n\r\nSupporting Parquet here sgtm; I'll modify the PR.\r\n\r\nI also updated the PR description with a common Spark-HF use case that we want to improve.", "Hey @albertvillanova @lhoestq , would one of you be able to re-review please? Thank you!", "@lhoestq this is ready for another pass! Thanks so much 🙏 ", "Friendly ping @lhoestq , also cc @polinaeterna who may be able to help take a look?", "Merging `main` into this branch should fix the CI", "Just rebased @lhoestq ", "Thanks @lhoestq ! Is there a way for me to trigger the github workflow myself to triage the test failure? I'm not able to repro the test failures locally.", "There were two test issues in the workflow that I wasn't able to reproduce locally:\r\n\r\n- Python 3.7: createDataFrame fails due to a pickling error. I modified the tests to instead write and read from json files\r\n- Python 3.10: A worker crashes for unknown reasons. I modified the spark setup to explicitly specify local mode in case it was trying to do something else; let's see if that fixes the issue", "Also one more question @lhoestq when is the next datasets release? We're hoping this can make it in", "I just re-ran the CI.\r\nI think we can do a release right after this PR is merged ;)", "Thanks all! @lhoestq could we re-run CI again please? I think we have to disable this feature on python 3.7 due to the pickling error. The other failure was due to https://issues.apache.org/jira/browse/SPARK-30952 so I rewrote the df processing", "Thanks @lhoestq , this is ready for another CI run. I pinned the pyspark version to see if that fixes the pickling issue", "The remaining CI issues have been addressed! They were\r\n\r\n- dill=0.3.1.1 is incompatible with cloudpickle, used by Spark. The min-dependency tests use this dill version, and those were failing. I added a skip-test annotation to skip Spark tests when using this dill version. This shouldn't be a production issue since if users are using that version of dill, they won't really be able to do anything with Spark anyway.\r\n- One of the Spark APIs used in this feature (mapInArrow) is incompatible with Windows. I filed a Spark ticket for the team to investigate. For the tests, I added another annotation to skip Spark tests on Windows. 
In the next PR (adding streaming mode), we should be able to support Windows since that won't use mapInArrow.\r\n\r\nI ran the CI on my forked branch: https://github.com/maddiedawson/datasets/pull/2 Everything passes except one instance of tests/test_metric_common.py::LocalMetricTest::test_load_metric_frugalscore; it looks like a flake.\r\n\r\n@lhoestq granted that the CI passes here, is this ok to merge and release? We'd like to put out a blog post tomorrow to broadcast this to Spark users!", "Thanks @lhoestq ! Could you help take a look at the error please? Seems unrelated...\r\n\r\nFAILED tests/test_arrow_dataset.py::BaseDatasetTest::test_map_multiprocessing_on_disk - NotADirectoryError: [WinError 267] The directory name is invalid: 'C:\\\\Users\\\\RUNNER~1\\\\AppData\\\\Local\\\\Temp\\\\tmptfnrdj4x\\\\cache-5c5687cf5629c97a_00000_of_00002.arrow'\r\n===== 1 failed, 2152 passed, 23 skipped, 20 warnings in 461.68s (0:07:41) =====", "The blog is live btw! https://www.databricks.com/blog/contributing-spark-loader-for-hugging-face-datasets Hopefully there can be a release today?", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.012686 / 0.011353 (0.001333) | 0.006051 / 0.011008 (-0.004957) | 0.123057 / 0.038508 (0.084549) | 0.033238 / 0.023109 (0.010128) | 0.388207 / 0.275898 (0.112309) | 0.393972 / 0.323480 (0.070492) | 0.006645 / 0.007986 (-0.001340) | 0.006715 / 0.004328 (0.002386) | 0.098348 / 0.004250 (0.094097) | 0.041410 / 0.037052 (0.004358) | 0.380123 / 0.258489 (0.121634) | 0.427982 / 0.293841 (0.134141) | 0.052194 / 0.128546 (-0.076352) | 0.018775 / 0.075646 (-0.056871) | 0.399063 / 0.419271 (-0.020209) | 0.061019 / 0.043533 (0.017487) | 0.370943 / 0.255139 (0.115804) | 0.398326 / 0.283200 (0.115127) | 0.136893 / 0.141683 (-0.004790) | 1.777431 / 1.452155 (0.325276) | 1.844354 / 1.492716 (0.351638) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.267296 / 0.018006 (0.249289) | 0.565133 / 0.000490 (0.564643) | 0.005811 / 0.000200 (0.005611) | 0.000122 / 0.000054 (0.000068) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027009 / 0.037411 (-0.010402) | 0.125907 / 0.014526 (0.111381) | 0.122111 / 0.176557 (-0.054445) | 0.189023 / 0.737135 (-0.548112) | 0.140510 / 0.296338 (-0.155829) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.589269 / 0.215209 (0.374060) | 6.038038 / 2.077655 (3.960384) | 2.394681 / 1.504120 (0.890561) | 2.099268 / 1.541195 (0.558073) | 2.105146 / 1.468490 (0.636656) | 1.216304 / 4.584777 (-3.368473) | 5.823110 / 3.745712 (2.077397) | 4.999323 / 5.269862 (-0.270539) | 2.781554 / 4.565676 (-1.784122) | 0.148370 / 0.424275 (-0.275905) | 0.015163 / 0.007607 (0.007556) | 0.775153 / 0.226044 (0.549109) | 7.425314 / 2.268929 (5.156385) | 3.320254 / 55.444624 (-52.124370) | 2.718595 / 6.876477 (-4.157881) | 2.696215 / 2.142072 (0.554142) | 1.452249 / 4.805227 (-3.352978) | 0.281355 / 6.500664 (-6.219309) | 0.088146 / 0.075469 (0.012677) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.495718 / 1.841788 (-0.346070) | 17.498714 / 8.074308 (9.424405) | 20.109705 / 10.191392 (9.918313) | 0.233053 / 0.680424 (-0.447371) | 0.028336 / 0.534201 (-0.505865) | 0.538146 / 0.579283 (-0.041137) | 0.642106 / 0.434364 (0.207742) | 0.597214 / 0.540337 (0.056876) | 0.732219 / 1.386936 (-0.654717) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008153 / 0.011353 (-0.003200) | 0.005605 / 0.011008 (-0.005403) 
| 0.096159 / 0.038508 (0.057651) | 0.034102 / 0.023109 (0.010992) | 0.428091 / 0.275898 (0.152193) | 0.476535 / 0.323480 (0.153056) | 0.006278 / 0.007986 (-0.001708) | 0.006752 / 0.004328 (0.002424) | 0.100553 / 0.004250 (0.096302) | 0.045546 / 0.037052 (0.008494) | 0.463236 / 0.258489 (0.204747) | 0.502512 / 0.293841 (0.208671) | 0.051014 / 0.128546 (-0.077533) | 0.018499 / 0.075646 (-0.057148) | 0.127587 / 0.419271 (-0.291685) | 0.059254 / 0.043533 (0.015722) | 0.432248 / 0.255139 (0.177109) | 0.462002 / 0.283200 (0.178802) | 0.124918 / 0.141683 (-0.016765) | 1.689740 / 1.452155 (0.237585) | 1.871546 / 1.492716 (0.378830) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.274844 / 0.018006 (0.256838) | 0.570522 / 0.000490 (0.570032) | 0.004008 / 0.000200 (0.003808) | 0.000146 / 0.000054 (0.000091) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025323 / 0.037411 (-0.012088) | 0.116323 / 0.014526 (0.101797) | 0.129434 / 0.176557 (-0.047122) | 0.187069 / 0.737135 (-0.550067) | 0.134459 / 0.296338 (-0.161880) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.633551 / 0.215209 (0.418341) | 6.290078 / 2.077655 (4.212423) | 2.692071 / 1.504120 (1.187951) | 2.354344 / 1.541195 (0.813149) | 2.409260 / 1.468490 (0.940770) | 1.270515 / 4.584777 (-3.314261) | 5.552982 / 3.745712 (1.807270) | 3.041417 / 5.269862 (-2.228444) | 1.920634 / 4.565676 (-2.645043) | 0.142500 / 0.424275 (-0.281775) | 0.014378 / 0.007607 (0.006770) | 0.786444 / 0.226044 (0.560399) | 7.711558 / 2.268929 (5.442630) | 3.439688 / 55.444624 (-52.004936) | 2.742314 / 6.876477 (-4.134163) | 2.800531 / 2.142072 (0.658458) | 1.405843 / 4.805227 (-3.399385) | 0.245322 / 6.500664 (-6.255342) | 0.076662 / 0.075469 (0.001193) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.592961 / 1.841788 (-0.248827) | 18.165647 / 8.074308 (10.091339) | 20.011433 / 10.191392 (9.820041) | 0.240558 / 0.680424 (-0.439866) | 0.026045 / 0.534201 (-0.508156) | 0.529610 / 0.579283 (-0.049674) | 0.652494 / 0.434364 (0.218130) | 0.612284 / 0.540337 (0.071947) | 0.733180 / 1.386936 (-0.653756) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ea251c726c73bd076a1bef7e39e2ac4e97c8d166 \"CML watermark\")\n", "python 3.9.2\r\nGot an error _pickle.PicklingError use Dataset.from_spark.\r\n\r\nDid the dataset import load 
data from spark dataframe using multi-node Spark cluster\r\ndf = spark.read.parquet(args.input_data).repartition(50)\r\nds = Dataset.from_spark(df, keep_in_memory=True,\r\n cache_dir=\"/pnc-data/data/nuplan/t5_spark/cache_data\")\r\nds.save_to_disk(args.output_data)\r\n\r\nError : \r\n_pickle.PicklingError: Could not serialize object: RuntimeError: It appears that you are attempting to reference SparkContext from a broadcast variable, action, or transforma\r\ntion. SparkContext can only be used on the driver, not in code that it run on workers. For more information, see SPARK-5063.\r\n23/06/16 21:17:20 WARN ExecutorPodsWatchSnapshotSource: Kubernetes client has been closed (this is expected if the application is shutting down.)\r\n", "Hi @yanzia12138 ! Could you open a new issue please and share the full stack trace ? This will help to know what happened exactly" ]
"2023-04-03T23:51:29Z"
"2023-06-16T16:39:32Z"
"2023-04-26T15:43:39Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5701.diff", "html_url": "https://github.com/huggingface/datasets/pull/5701", "merged_at": "2023-04-26T15:43:39Z", "patch_url": "https://github.com/huggingface/datasets/pull/5701.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5701" }
Adds static method Dataset.from_spark to create datasets from Spark DataFrames. This approach alleviates users of the need to materialize their dataframe---a common use case is that the user loads their dataset into a dataframe, uses Spark to apply some transformation to some of the columns, and then wants to train on the dataset. Related issue: https://github.com/huggingface/datasets/issues/5678
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 2, "hooray": 4, "laugh": 0, "rocket": 0, "total_count": 6, "url": "https://api.github.com/repos/huggingface/datasets/issues/5701/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5701/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4482
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4482/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4482/comments
https://api.github.com/repos/huggingface/datasets/issues/4482/events
https://github.com/huggingface/datasets/pull/4482
1,269,237,447
PR_kwDODunzps45jS_c
4,482
Test that TensorFlow is not imported on startup
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Should we close this PR?", "I'm closing this PR. Feel free to reopen it if necessary." ]
"2022-06-13T10:33:49Z"
"2023-10-12T06:31:39Z"
"2023-10-11T09:11:56Z"
MEMBER
null
1
{ "diff_url": "https://github.com/huggingface/datasets/pull/4482.diff", "html_url": "https://github.com/huggingface/datasets/pull/4482", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/4482.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4482" }
TF takes some time to be imported, and also uses some GPU memory. I just added a test to make sure that in the future it's never imported by default when ```python import datasets ``` is called. Right now this fails because `huggingface_hub` does import tensorflow (though this is fixed now on their `main` branch) I'll mark this PR as ready for review once `huggingface_hub` has a new release
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4482/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4482/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1206
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1206/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1206/comments
https://api.github.com/repos/huggingface/datasets/issues/1206/events
https://github.com/huggingface/datasets/pull/1206
757,952,992
MDExOlB1bGxSZXF1ZXN0NTMzMjE2NDYw
1,206
Adding Enriched WebNLG dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4", "events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}", "followers_url": "https://api.github.com/users/TevenLeScao/followers", "following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}", "gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/TevenLeScao", "id": 26709476, "login": "TevenLeScao", "node_id": "MDQ6VXNlcjI2NzA5NDc2", "organizations_url": "https://api.github.com/users/TevenLeScao/orgs", "received_events_url": "https://api.github.com/users/TevenLeScao/received_events", "repos_url": "https://api.github.com/users/TevenLeScao/repos", "site_admin": false, "starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions", "type": "User", "url": "https://api.github.com/users/TevenLeScao" }
[]
closed
false
null
[]
null
[ "Nice :) \r\n\r\ncould you add the tags and also remove all the dummy data files that are not zipped ? The diff currently shows 800 files changes xD", "Aaaaand it's rebase time - the new one is at #1264 !", "closing this one since a new PR was created" ]
"2020-12-06T15:36:20Z"
"2023-09-24T09:51:43Z"
"2020-12-09T09:40:32Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1206.diff", "html_url": "https://github.com/huggingface/datasets/pull/1206", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/1206.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1206" }
This pull requests adds the `en` and `de` versions of the [Enriched WebNLG](https://github.com/ThiagoCF05/webnlg) dataset
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1206/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1206/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4377
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4377/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4377/comments
https://api.github.com/repos/huggingface/datasets/issues/4377/events
https://github.com/huggingface/datasets/pull/4377
1,242,746,186
PR_kwDODunzps44K4OY
4,377
Fix checksum and bug in irc_disentangle dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
"2022-05-20T07:29:28Z"
"2022-05-20T09:34:36Z"
"2022-05-20T09:26:32Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4377.diff", "html_url": "https://github.com/huggingface/datasets/pull/4377", "merged_at": "2022-05-20T09:26:32Z", "patch_url": "https://github.com/huggingface/datasets/pull/4377.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4377" }
There was a bug in filepath segment: - wrong: `jkkummerfeld-irc-disentanglement-fd379e9` - right: `jkkummerfeld-irc-disentanglement-35f0a40` Also there was a bug in the checksum of the downloaded file. This PR fixes these issues. Fix partially #4376.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4377/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4377/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/492
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/492/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/492/comments
https://api.github.com/repos/huggingface/datasets/issues/492/events
https://github.com/huggingface/datasets/issues/492
676,495,064
MDU6SXNzdWU2NzY0OTUwNjQ=
492
nlp.Features does not distinguish between nullable and non-nullable types in PyArrow schema
{ "avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4", "events_url": "https://api.github.com/users/jarednielsen/events{/privacy}", "followers_url": "https://api.github.com/users/jarednielsen/followers", "following_url": "https://api.github.com/users/jarednielsen/following{/other_user}", "gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jarednielsen", "id": 4564897, "login": "jarednielsen", "node_id": "MDQ6VXNlcjQ1NjQ4OTc=", "organizations_url": "https://api.github.com/users/jarednielsen/orgs", "received_events_url": "https://api.github.com/users/jarednielsen/received_events", "repos_url": "https://api.github.com/users/jarednielsen/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions", "type": "User", "url": "https://api.github.com/users/jarednielsen" }
[]
closed
false
null
[]
null
[ "In 0.4.0, the assertion in `concatenate_datasets ` is on the features, and not the schema.\r\nCould you try to update `nlp` ?\r\n\r\nAlso, since 0.4.0, you can use `dset_wikipedia.cast_(dset_books.features)` to avoid the schema cast hack.", "Or maybe the assertion comes from elsewhere ?", "I'm using the master branch. The assertion failure comes from the underlying `pa.concat_tables()`, which is in the pyarrow package. That method does check schemas.\r\n\r\nSince `features.type` does not contain information about nullable vs non-nullable features, the `cast_()` method won't resolve the schema mismatch. There is information in a schema which is not stored in features.", "I'm doing a refactor of type inference in #363 . Both text fields should match after that", "By default nullable will be set to True", "It should be good now. I was able to run\r\n\r\n```python\r\n>>> from nlp import concatenate_datasets, load_dataset\r\n>>>\r\n>>> bookcorpus = load_dataset(\"bookcorpus\", split=\"train\")\r\n>>> wiki = load_dataset(\"wikipedia\", \"20200501.en\", split=\"train\")\r\n>>> wiki.remove_columns_(\"title\") # only keep the text\r\n>>>\r\n>>> assert bookcorpus.features.type == wiki.features.type\r\n>>> bert_dataset = concatenate_datasets([bookcorpus, wiki])\r\n```", "Thanks!" ]
"2020-08-11T00:27:46Z"
"2020-08-26T16:17:19Z"
"2020-08-26T16:17:19Z"
CONTRIBUTOR
null
null
null
Here's the code I'm trying to run: ```python dset_wikipedia = nlp.load_dataset("wikipedia", "20200501.en", split="train", cache_dir=args.cache_dir) dset_wikipedia.drop(columns=["title"]) dset_wikipedia.features.pop("title") dset_books = nlp.load_dataset("bookcorpus", split="train", cache_dir=args.cache_dir) dset = nlp.concatenate_datasets([dset_wikipedia, dset_books]) ``` This fails because they have different schemas, despite having identical features. ```python assert dset_wikipedia.features == dset_books.features # True assert dset_wikipedia._data.schema == dset_books._data.schema # False ``` The Wikipedia dataset has 'text: string', while the BookCorpus dataset has 'text: string not null'. Currently I hack together a working schema match with the following line, but it would be better if this was handled in Features themselves. ```python dset_wikipedia._data = dset_wikipedia.data.cast(dset_books._data.schema) ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/492/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/492/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/3314
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3314/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3314/comments
https://api.github.com/repos/huggingface/datasets/issues/3314/events
https://github.com/huggingface/datasets/pull/3314
1,061,448,227
PR_kwDODunzps4u6mdX
3,314
Adding arg to pass process rank to `map`
{ "avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4", "events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}", "followers_url": "https://api.github.com/users/TevenLeScao/followers", "following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}", "gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/TevenLeScao", "id": 26709476, "login": "TevenLeScao", "node_id": "MDQ6VXNlcjI2NzA5NDc2", "organizations_url": "https://api.github.com/users/TevenLeScao/orgs", "received_events_url": "https://api.github.com/users/TevenLeScao/received_events", "repos_url": "https://api.github.com/users/TevenLeScao/repos", "site_admin": false, "starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions", "type": "User", "url": "https://api.github.com/users/TevenLeScao" }
[]
closed
false
null
[]
null
[ "Some commits seem to be there twice (made the mistake of rebasing because I wasn't sure whether the doc had changed), is this an issue @lhoestq ?" ]
"2021-11-23T15:55:21Z"
"2021-11-24T11:54:13Z"
"2021-11-24T11:54:13Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3314.diff", "html_url": "https://github.com/huggingface/datasets/pull/3314", "merged_at": "2021-11-24T11:54:13Z", "patch_url": "https://github.com/huggingface/datasets/pull/3314.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3314" }
This PR adds a `with_rank` argument to `map` that gives the user the possibility to pass the rank of each process to their function. This is mostly designed for multi-GPU map (each process can be sent to a different device thanks to the rank). I've also added tests. I'm putting the PR up so you can check the code, I'll add a multi-GPU example to the doc (+ write a bit in the doc for the new arg)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3314/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3314/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4489
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4489/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4489/comments
https://api.github.com/repos/huggingface/datasets/issues/4489/events
https://github.com/huggingface/datasets/pull/4489
1,270,706,195
PR_kwDODunzps45oONF
4,489
Add SV-Ident dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/20404466?v=4", "events_url": "https://api.github.com/users/e-tornike/events{/privacy}", "followers_url": "https://api.github.com/users/e-tornike/followers", "following_url": "https://api.github.com/users/e-tornike/following{/other_user}", "gists_url": "https://api.github.com/users/e-tornike/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/e-tornike", "id": 20404466, "login": "e-tornike", "node_id": "MDQ6VXNlcjIwNDA0NDY2", "organizations_url": "https://api.github.com/users/e-tornike/orgs", "received_events_url": "https://api.github.com/users/e-tornike/received_events", "repos_url": "https://api.github.com/users/e-tornike/repos", "site_admin": false, "starred_url": "https://api.github.com/users/e-tornike/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/e-tornike/subscriptions", "type": "User", "url": "https://api.github.com/users/e-tornike" }
[]
closed
false
null
[]
null
[ "Hi @e-tornike, thanks a lot for adding this interesting dataset.\r\n\r\nRecently at Hugging Face, we have decided to give priority to adding datasets directly on the Hub. Would you mind to transfer your loading script to the Hub? You could create a dedicated org namespace, so that your dataset would be accessible using `vadis/sv_ident` or `sdproc/sv_ident` or `coling/sv_ident` (as you prefer).\r\n\r\nYou have an example here: https://huggingface.co/datasets/projecte-aina/catalan_textual_corpus", "Additionally, please feel free to ping us if you need assistance/help in creating this dataset.\r\n\r\nYou could put the link to your Hub dataset here in this Issue discussion page, so that we can follow the progress. :)", "Hi @albertvillanova, thanks for the feedback! Uploading via the Hub is a lot easier! \r\n\r\nI've uploaded the dataset here: https://huggingface.co/datasets/vadis/sv-ident, but it's reporting a \"Status400Error\". Is there any way to see the logs of the dataset script and what is causing the error?", "Hi @e-tornike, good job at https://huggingface.co/datasets/vadis/sv-ident.\r\n\r\nNormally, you can run locally the loading of the dataset by passing `streaming=True` (as the previewer does):\r\n```python\r\nds = load_dataset(\"path/to/sv_ident.py, split=\"train\", streaming=True)\r\nitem = next(iter(ds))\r\nitem\r\n```\r\n\r\nLet me have a look and open a discussion on your Hub repo! ;)", "I've opened an Issue: \r\n- #4527 " ]
"2022-06-14T12:09:00Z"
"2022-06-20T08:48:26Z"
"2022-06-20T08:37:27Z"
NONE
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4489.diff", "html_url": "https://github.com/huggingface/datasets/pull/4489", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/4489.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4489" }
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4489/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4489/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/255
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/255/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/255/comments
https://api.github.com/repos/huggingface/datasets/issues/255/events
https://github.com/huggingface/datasets/pull/255
635,300,822
MDExOlB1bGxSZXF1ZXN0NDMxNjg3MDM0
255
Add dataset/piaf
{ "avatar_url": "https://avatars.githubusercontent.com/u/36986299?v=4", "events_url": "https://api.github.com/users/RachelKer/events{/privacy}", "followers_url": "https://api.github.com/users/RachelKer/followers", "following_url": "https://api.github.com/users/RachelKer/following{/other_user}", "gists_url": "https://api.github.com/users/RachelKer/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/RachelKer", "id": 36986299, "login": "RachelKer", "node_id": "MDQ6VXNlcjM2OTg2Mjk5", "organizations_url": "https://api.github.com/users/RachelKer/orgs", "received_events_url": "https://api.github.com/users/RachelKer/received_events", "repos_url": "https://api.github.com/users/RachelKer/repos", "site_admin": false, "starred_url": "https://api.github.com/users/RachelKer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/RachelKer/subscriptions", "type": "User", "url": "https://api.github.com/users/RachelKer" }
[]
closed
false
null
[]
null
[ "Very nice !" ]
"2020-06-09T10:16:01Z"
"2020-06-12T08:31:27Z"
"2020-06-12T08:31:27Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/255.diff", "html_url": "https://github.com/huggingface/datasets/pull/255", "merged_at": "2020-06-12T08:31:27Z", "patch_url": "https://github.com/huggingface/datasets/pull/255.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/255" }
Small SQuAD-like French QA dataset [PIAF](https://www.aclweb.org/anthology/2020.lrec-1.673.pdf)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/255/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/255/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2204
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2204/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2204/comments
https://api.github.com/repos/huggingface/datasets/issues/2204/events
https://github.com/huggingface/datasets/pull/2204
855,144,431
MDExOlB1bGxSZXF1ZXN0NjEyOTU1MzM2
2,204
Add configurable options to `seqeval` metric
{ "avatar_url": "https://avatars.githubusercontent.com/u/44571847?v=4", "events_url": "https://api.github.com/users/marrodion/events{/privacy}", "followers_url": "https://api.github.com/users/marrodion/followers", "following_url": "https://api.github.com/users/marrodion/following{/other_user}", "gists_url": "https://api.github.com/users/marrodion/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/marrodion", "id": 44571847, "login": "marrodion", "node_id": "MDQ6VXNlcjQ0NTcxODQ3", "organizations_url": "https://api.github.com/users/marrodion/orgs", "received_events_url": "https://api.github.com/users/marrodion/received_events", "repos_url": "https://api.github.com/users/marrodion/repos", "site_admin": false, "starred_url": "https://api.github.com/users/marrodion/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/marrodion/subscriptions", "type": "User", "url": "https://api.github.com/users/marrodion" }
[]
closed
false
null
[]
null
[]
"2021-04-10T19:58:19Z"
"2021-04-15T13:49:46Z"
"2021-04-15T13:49:46Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2204.diff", "html_url": "https://github.com/huggingface/datasets/pull/2204", "merged_at": "2021-04-15T13:49:46Z", "patch_url": "https://github.com/huggingface/datasets/pull/2204.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2204" }
Fixes #2148 Adds options to use strict mode, different schemes of evaluation, sample weight and adjust zero_division behavior, if encountered. `seqeval` provides schemes as objects, hence dynamic import from string, to avoid making the user do the import (thanks to @albertvillanova for the `importlib` idea).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2204/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2204/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1134
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1134/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1134/comments
https://api.github.com/repos/huggingface/datasets/issues/1134/events
https://github.com/huggingface/datasets/pull/1134
757,317,651
MDExOlB1bGxSZXF1ZXN0NTMyNzE0MjQ2
1,134
adding xquad-r dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/6687858?v=4", "events_url": "https://api.github.com/users/manandey/events{/privacy}", "followers_url": "https://api.github.com/users/manandey/followers", "following_url": "https://api.github.com/users/manandey/following{/other_user}", "gists_url": "https://api.github.com/users/manandey/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/manandey", "id": 6687858, "login": "manandey", "node_id": "MDQ6VXNlcjY2ODc4NTg=", "organizations_url": "https://api.github.com/users/manandey/orgs", "received_events_url": "https://api.github.com/users/manandey/received_events", "repos_url": "https://api.github.com/users/manandey/repos", "site_admin": false, "starred_url": "https://api.github.com/users/manandey/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/manandey/subscriptions", "type": "User", "url": "https://api.github.com/users/manandey" }
[]
closed
false
null
[]
null
[]
"2020-12-04T18:39:13Z"
"2020-12-05T16:50:47Z"
"2020-12-05T16:50:47Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1134.diff", "html_url": "https://github.com/huggingface/datasets/pull/1134", "merged_at": "2020-12-05T16:50:47Z", "patch_url": "https://github.com/huggingface/datasets/pull/1134.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1134" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1134/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1134/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/702
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/702/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/702/comments
https://api.github.com/repos/huggingface/datasets/issues/702/events
https://github.com/huggingface/datasets/pull/702
713,499,628
MDExOlB1bGxSZXF1ZXN0NDk2ODA3Mjg4
702
Complete rouge kwargs
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
"2020-10-02T09:59:01Z"
"2020-10-02T10:11:04Z"
"2020-10-02T10:11:03Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/702.diff", "html_url": "https://github.com/huggingface/datasets/pull/702", "merged_at": "2020-10-02T10:11:03Z", "patch_url": "https://github.com/huggingface/datasets/pull/702.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/702" }
In #701 we noticed that some kwargs were missing for rouge
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/702/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/702/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2021
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2021/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2021/comments
https://api.github.com/repos/huggingface/datasets/issues/2021/events
https://github.com/huggingface/datasets/issues/2021
826,988,016
MDU6SXNzdWU4MjY5ODgwMTY=
2,021
Interactively doing save_to_disk and load_from_disk corrupts the datasets object?
{ "avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4", "events_url": "https://api.github.com/users/shamanez/events{/privacy}", "followers_url": "https://api.github.com/users/shamanez/followers", "following_url": "https://api.github.com/users/shamanez/following{/other_user}", "gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/shamanez", "id": 16892570, "login": "shamanez", "node_id": "MDQ6VXNlcjE2ODkyNTcw", "organizations_url": "https://api.github.com/users/shamanez/orgs", "received_events_url": "https://api.github.com/users/shamanez/received_events", "repos_url": "https://api.github.com/users/shamanez/repos", "site_admin": false, "starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shamanez/subscriptions", "type": "User", "url": "https://api.github.com/users/shamanez" }
[]
closed
false
null
[]
null
[ "Hi,\r\n\r\nCan you give us a minimal reproducible example? This [part](https://huggingface.co/docs/datasets/master/processing.html#controling-the-cache-behavior) of the docs explains how to control caching." ]
"2021-03-10T02:48:34Z"
"2021-03-13T10:07:41Z"
"2021-03-13T10:07:41Z"
NONE
null
null
null
dataset_info.json file saved after using save_to_disk gets corrupted as follows. ![image](https://user-images.githubusercontent.com/16892570/110568474-ed969880-81b7-11eb-832f-2e5129656016.png) Is there a way to disable the cache that will save to /tmp/huggiface/datastes ? I have a feeling there is a serious issue with cashing.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2021/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2021/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5933
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5933/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5933/comments
https://api.github.com/repos/huggingface/datasets/issues/5933/events
https://github.com/huggingface/datasets/pull/5933
1,747,382,500
PR_kwDODunzps5Sfi5J
5,933
Fix `to_numpy` when None values in the sequence
{ "avatar_url": "https://avatars.githubusercontent.com/u/45557362?v=4", "events_url": "https://api.github.com/users/qgallouedec/events{/privacy}", "followers_url": "https://api.github.com/users/qgallouedec/followers", "following_url": "https://api.github.com/users/qgallouedec/following{/other_user}", "gists_url": "https://api.github.com/users/qgallouedec/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/qgallouedec", "id": 45557362, "login": "qgallouedec", "node_id": "MDQ6VXNlcjQ1NTU3MzYy", "organizations_url": "https://api.github.com/users/qgallouedec/orgs", "received_events_url": "https://api.github.com/users/qgallouedec/received_events", "repos_url": "https://api.github.com/users/qgallouedec/repos", "site_admin": false, "starred_url": "https://api.github.com/users/qgallouedec/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/qgallouedec/subscriptions", "type": "User", "url": "https://api.github.com/users/qgallouedec" }
[]
closed
false
null
[]
null
[ "I just added the same test with dynamic shape", "_The documentation is not available anymore as the PR was closed or merged._", "Awesome ! I'm merging now if you don't mind :)\r\nWe should probably give you permissions to merge your own PRs when you have an approval", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009980 / 0.011353 (-0.001373) | 0.005709 / 0.011008 (-0.005300) | 0.132185 / 0.038508 (0.093677) | 0.039299 / 0.023109 (0.016190) | 0.400168 / 0.275898 (0.124270) | 0.470582 / 0.323480 (0.147102) | 0.007753 / 0.007986 (-0.000233) | 0.005196 / 0.004328 (0.000868) | 0.093698 / 0.004250 (0.089448) | 0.052631 / 0.037052 (0.015579) | 0.430347 / 0.258489 (0.171858) | 0.460162 / 0.293841 (0.166321) | 0.057511 / 0.128546 (-0.071035) | 0.013944 / 0.075646 (-0.061702) | 0.459008 / 0.419271 (0.039737) | 0.075532 / 0.043533 (0.031999) | 0.405165 / 0.255139 (0.150026) | 0.456142 / 0.283200 (0.172942) | 0.117309 / 0.141683 (-0.024374) | 1.945787 / 1.452155 (0.493633) | 2.067162 / 1.492716 (0.574446) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.285755 / 0.018006 (0.267749) | 0.619965 / 0.000490 (0.619476) | 0.005071 / 0.000200 (0.004871) | 0.000114 / 0.000054 (0.000059) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031112 / 0.037411 (-0.006299) | 0.128514 / 0.014526 (0.113988) | 0.137161 / 0.176557 (-0.039396) | 0.211363 / 0.737135 (-0.525772) | 0.151045 / 0.296338 (-0.145293) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 
1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.609361 / 0.215209 (0.394152) | 6.124844 / 2.077655 (4.047189) | 2.440757 / 1.504120 (0.936637) | 2.034495 / 1.541195 (0.493300) | 2.047192 / 1.468490 (0.578702) | 0.883171 / 4.584777 (-3.701606) | 5.470552 / 3.745712 (1.724840) | 4.401696 / 5.269862 (-0.868165) | 2.378674 / 4.565676 (-2.187003) | 0.108065 / 0.424275 (-0.316210) | 0.013239 / 0.007607 (0.005632) | 0.830957 / 0.226044 (0.604913) | 8.090659 / 2.268929 (5.821731) | 3.289203 / 55.444624 (-52.155422) | 2.500777 / 6.876477 (-4.375700) | 2.561440 / 2.142072 (0.419367) | 1.064893 / 4.805227 (-3.740334) | 0.220486 / 6.500664 (-6.280178) | 0.079507 / 0.075469 (0.004038) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.544334 / 1.841788 (-0.297454) | 17.878997 / 8.074308 (9.804689) | 18.952191 / 10.191392 (8.760799) | 0.245166 / 0.680424 (-0.435258) | 0.028022 / 0.534201 (-0.506179) | 0.517828 / 0.579283 (-0.061455) | 0.618988 / 0.434364 (0.184624) | 0.589742 / 0.540337 (0.049405) | 0.670902 / 1.386936 (-0.716034) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009616 / 0.011353 (-0.001737) | 0.006098 / 0.011008 (-0.004911) | 0.100301 / 0.038508 (0.061793) | 0.037792 / 0.023109 (0.014683) | 0.484667 / 0.275898 (0.208769) | 0.519286 / 0.323480 (0.195806) | 0.007427 / 0.007986 (-0.000558) | 0.007172 / 0.004328 (0.002844) | 0.104429 / 0.004250 (0.100179) | 0.056567 / 0.037052 (0.019515) | 0.502641 / 0.258489 (0.244152) | 0.549629 / 0.293841 (0.255788) | 0.049574 / 0.128546 (-0.078972) | 0.015223 / 0.075646 (-0.060424) | 0.113947 / 0.419271 (-0.305324) | 0.064585 / 0.043533 (0.021053) | 0.512962 / 0.255139 (0.257823) | 0.507218 / 0.283200 (0.224019) | 0.122194 / 0.141683 (-0.019488) | 1.927821 / 1.452155 (0.475667) | 2.051161 / 1.492716 (0.558445) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old 
(diff) | 0.291350 / 0.018006 (0.273344) | 0.588099 / 0.000490 (0.587610) | 0.001368 / 0.000200 (0.001168) | 0.000153 / 0.000054 (0.000099) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030604 / 0.037411 (-0.006807) | 0.126810 / 0.014526 (0.112285) | 0.139309 / 0.176557 (-0.037248) | 0.208030 / 0.737135 (-0.529105) | 0.138985 / 0.296338 (-0.157353) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.681254 / 0.215209 (0.466045) | 6.753856 / 2.077655 (4.676201) | 2.780704 / 1.504120 (1.276585) | 2.475205 / 1.541195 (0.934010) | 2.486784 / 1.468490 (1.018294) | 0.879223 / 4.584777 (-3.705554) | 5.662294 / 3.745712 (1.916582) | 2.698705 / 5.269862 (-2.571156) | 1.660620 / 4.565676 (-2.905057) | 0.112218 / 0.424275 (-0.312057) | 0.014211 / 0.007607 (0.006604) | 0.796957 / 0.226044 (0.570913) | 8.180897 / 2.268929 (5.911969) | 3.540419 / 55.444624 (-51.904205) | 2.899467 / 6.876477 (-3.977010) | 2.870306 / 2.142072 (0.728233) | 1.069537 / 4.805227 (-3.735690) | 0.211281 / 6.500664 (-6.289383) | 0.078898 / 0.075469 (0.003429) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.666790 / 1.841788 (-0.174998) | 18.302127 / 8.074308 (10.227819) | 21.317546 / 10.191392 (11.126153) | 0.242795 / 0.680424 (-0.437629) | 0.026754 / 0.534201 (-0.507447) | 0.493375 / 0.579283 (-0.085908) | 0.605400 / 0.434364 (0.171036) | 0.586888 / 0.540337 (0.046550) | 0.722809 / 1.386936 (-0.664127) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ce2328e7b1d62998b22510492530af55d4493b73 \"CML watermark\")\n" ]
"2023-06-08T08:38:56Z"
"2023-06-09T13:49:41Z"
"2023-06-09T13:23:48Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5933.diff", "html_url": "https://github.com/huggingface/datasets/pull/5933", "merged_at": "2023-06-09T13:23:48Z", "patch_url": "https://github.com/huggingface/datasets/pull/5933.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5933" }
Closes #5927 I've realized that the error was overlooked during testing due to the presence of only one None value in the sequence. Unfortunately, it was the only case where the function works as expected. When the sequence contained more than one None value, the function failed. Consequently, I've updated the tests to include sequences with multiple None values.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5933/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5933/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3933
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3933/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3933/comments
https://api.github.com/repos/huggingface/datasets/issues/3933/events
https://github.com/huggingface/datasets/pull/3933
1,170,253,605
PR_kwDODunzps40flNM
3,933
Update README.md
{ "avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4", "events_url": "https://api.github.com/users/sashavor/events{/privacy}", "followers_url": "https://api.github.com/users/sashavor/followers", "following_url": "https://api.github.com/users/sashavor/following{/other_user}", "gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sashavor", "id": 14205986, "login": "sashavor", "node_id": "MDQ6VXNlcjE0MjA1OTg2", "organizations_url": "https://api.github.com/users/sashavor/orgs", "received_events_url": "https://api.github.com/users/sashavor/received_events", "repos_url": "https://api.github.com/users/sashavor/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sashavor/subscriptions", "type": "User", "url": "https://api.github.com/users/sashavor" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
"2022-03-15T20:52:05Z"
"2022-03-17T17:51:24Z"
"2022-03-17T17:47:37Z"
NONE
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3933.diff", "html_url": "https://github.com/huggingface/datasets/pull/3933", "merged_at": "2022-03-17T17:47:37Z", "patch_url": "https://github.com/huggingface/datasets/pull/3933.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3933" }
Fixing missing triple quote
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3933/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3933/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5295
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5295/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5295/comments
https://api.github.com/repos/huggingface/datasets/issues/5295/events
https://github.com/huggingface/datasets/issues/5295
1,464,006,743
I_kwDODunzps5XQvhX
5,295
Extractions failed when .zip file located on read-only path (e.g., SageMaker FastFile mode)
{ "avatar_url": "https://avatars.githubusercontent.com/u/2340781?v=4", "events_url": "https://api.github.com/users/verdimrc/events{/privacy}", "followers_url": "https://api.github.com/users/verdimrc/followers", "following_url": "https://api.github.com/users/verdimrc/following{/other_user}", "gists_url": "https://api.github.com/users/verdimrc/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/verdimrc", "id": 2340781, "login": "verdimrc", "node_id": "MDQ6VXNlcjIzNDA3ODE=", "organizations_url": "https://api.github.com/users/verdimrc/orgs", "received_events_url": "https://api.github.com/users/verdimrc/received_events", "repos_url": "https://api.github.com/users/verdimrc/repos", "site_admin": false, "starred_url": "https://api.github.com/users/verdimrc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/verdimrc/subscriptions", "type": "User", "url": "https://api.github.com/users/verdimrc" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
null
[ "Hi ! Thanks for reporting. Indeed the lock file should be placed in a directory with write permission (e.g. in the directory where the archive is extracted).", "I opened https://github.com/huggingface/datasets/pull/5320 to fix this - it places the lock file in the cache directory instead of trying to put in next to the ZIP where it's read-only" ]
"2022-11-25T03:59:43Z"
"2023-07-21T14:39:09Z"
"2023-07-21T14:39:09Z"
NONE
null
null
null
### Describe the bug Hi, `load_dataset()` does not work .zip files located on a read-only directory. Looks like it's because Dataset creates a lock file in the [same directory](https://github.com/huggingface/datasets/blob/df4bdd365f2abb695f113cbf8856a925bc70901b/src/datasets/utils/extract.py) as the .zip file. Encountered this when attempting `load_dataset()` on a datadir with SageMaker FastFile mode. ### Steps to reproduce the bug ```python # Showing relevant lines only. hyperparameters = { "dataset_name": "ydshieh/coco_dataset_script", "dataset_config_name": 2017, "data_dir": "/opt/ml/input/data/coco", "cache_dir": "/tmp/huggingface-cache", # Fix dataset complains out-of-space. ... } estimator = PyTorch( base_job_name="clip", source_dir="../src/sm-entrypoint", entry_point="run_clip.py", # Transformers/src/examples/pytorch/contrastive-image-text/run_clip.py framework_version="1.12", py_version="py38", hyperparameters=hyperparameters, instance_count=1, instance_type="ml.p3.16xlarge", volume_size=100, distribution={"smdistributed": {"dataparallel": {"enabled": True}}}, ) fast_file = lambda x: TrainingInput(x, input_mode='FastFile') estimator.fit( { "pre-trained": fast_file("s3://vm-sagemakerr-us-east-1/clip/pre-trained-checkpoint/"), "coco": fast_file("s3://vm-sagemakerr-us-east-1/clip/coco-zip-files/"), } ) ``` Error message: ```text ErrorMessage "OSError: [Errno 30] Read-only file system: '/opt/ml/input/data/coco/image_info_test2017.zip.lock' """ The above exception was the direct cause of the following exception Traceback (most recent call last) File "/opt/conda/lib/python3.8/runpy.py", line 194, in _run_module_as_main return _run_code(code, main_globals, None, File "/opt/conda/lib/python3.8/runpy.py", line 87, in _run_code exec(code, run_globals) File "/opt/conda/lib/python3.8/site-packages/mpi4py/__main__.py", line 7, in <module> main() File "/opt/conda/lib/python3.8/site-packages/mpi4py/run.py", line 198, in main run_command_line(args) File "/opt/conda/lib/python3.8/site-packages/mpi4py/run.py", line 47, in run_command_line run_path(sys.argv[0], run_name='__main__') File "/opt/conda/lib/python3.8/runpy.py", line 265, in run_path return _run_module_code(code, init_globals, run_name, File "/opt/conda/lib/python3.8/runpy.py", line 97, in _run_module_code _run_code(code, mod_globals, init_globals, File "run_clip_smddp.py", line 594, in <module> File "run_clip_smddp.py", line 327, in main dataset = load_dataset( File "/opt/conda/lib/python3.8/site-packages/datasets/load.py", line 1741, in load_dataset builder_instance.download_and_prepare( File "/opt/conda/lib/python3.8/site-packages/datasets/builder.py", line 822, in download_and_prepare self._download_and_prepare( File "/opt/conda/lib/python3.8/site-packages/datasets/builder.py", line 1555, in _download_and_prepare super()._download_and_prepare( File "/opt/conda/lib/python3.8/site-packages/datasets/builder.py", line 891, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/root/.cache/huggingface/modules/datasets_modules/datasets/ydshieh--coco_dataset_script/e033205c0266a54c10be132f9264f2a39dcf893e798f6756d224b1ff5078998f/coco_dataset_script.py", line 123, in _split_generators archive_path = dl_manager.download_and_extract(_DL_URLS) File "/opt/conda/lib/python3.8/site-packages/datasets/download/download_manager.py", line 447, in download_and_extract return self.extract(self.download(url_or_urls)) File 
"/opt/conda/lib/python3.8/site-packages/datasets/download/download_manager.py", line 419, in extract extracted_paths = map_nested( File "/opt/conda/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 472, in map_nested mapped = pool.map(_single_map_nested, split_kwds) File "/opt/conda/lib/python3.8/multiprocessing/pool.py", line 364, in map return self._map_async(func, iterable, mapstar, chunksize).get() File "/opt/conda/lib/python3.8/multiprocessing/pool.py", line 771, in get raise self._value OSError: [Errno 30] Read-only file system: '/opt/ml/input/data/coco/image_info_test2017.zip.lock'" ``` ### Expected behavior `load_dataset()` to succeed, just like when .zip file is passed in SageMaker File mode. ### Environment info * datasets-2.7.1 * transformers-4.24.0 * python-3.8 * torch-1.12 * SageMaker PyTorch DLC
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5295/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5295/timeline
null
completed
false
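The traceback above shows the `.lock` file being created next to the archive on the read-only FastFile mount. A hypothetical workaround, not taken from the issue thread itself, is to copy the archives into a writable scratch directory before calling `load_dataset` (paths below are illustrative):

```python
# Hypothetical workaround sketch (not from the issue thread): copy the .zip
# archives off the read-only mount so extraction can create its lock files
# next to them in a writable location.
import pathlib
import shutil

src = pathlib.Path("/opt/ml/input/data/coco")   # read-only FastFile mount
dst = pathlib.Path("/tmp/coco-writable")        # writable scratch directory
dst.mkdir(parents=True, exist_ok=True)
for zip_path in src.glob("*.zip"):
    shutil.copy2(zip_path, dst / zip_path.name)
# then point data_dir at dst instead of the read-only mount
```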
https://api.github.com/repos/huggingface/datasets/issues/3980
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3980/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3980/comments
https://api.github.com/repos/huggingface/datasets/issues/3980/events
https://github.com/huggingface/datasets/pull/3980
1,175,412,905
PR_kwDODunzps40vdcH
3,980
Add tip on how to speed up loading with ImageFolder
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thanks for adding that tip! 👍 \r\n\r\nFor the docs syntax, it might be better if we hide the package name/full path to the class or function and only show the name of it. I think it's easier for users to read the function name (eg,`cast_column`) instead of the full path which can be a bit lengthy for some functions like `datasets.IterableDataset.remove_columns` (and if we like this idea, we can align the rest of the docs on it). ", "> For the docs syntax, it might be better if we hide the package name/full path to the class or function and only show the name of it. I think it's easier for users to read the function name (eg,cast_column) instead of the full path which can be a bit lengthy for some functions like datasets.IterableDataset.remove_columns (and if we like this idea, we can align the rest of the docs on it).\r\n\r\nThat's also OK, as long as we are consistent.\r\n\r\n@lhoestq @albertvillanova @polinaeterna Which one of these two styles do you prefer?", "Agree on hiding `datasets` name. Not sure about hiding class name as it's anyway not visible for users if they use `Dataset.cast_column` or `IterableDataset.cast_column` when working with their datasets. But I agree that the most important thing is to be consistent :)", "Good points! :)\r\n\r\nI think it'll be good to show the class name since some functions have different parameters. For example, if users click on `IterableDataset.map` and then `Dataset.map`, they'll see different parameters and have to figure out why (which isn't too difficult I guess lol). But showing the class name avoids any confusion upfront. " ]
"2022-03-21T13:45:58Z"
"2022-03-22T13:39:45Z"
"2022-03-22T13:34:56Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3980.diff", "html_url": "https://github.com/huggingface/datasets/pull/3980", "merged_at": "2022-03-22T13:34:56Z", "patch_url": "https://github.com/huggingface/datasets/pull/3980.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3980" }
This PR does two things: * adds a tip on how to speed up loading of a large number of files with ImageFolder (motivated by [this issue](https://github.com/huggingface/datasets/issues/3960)) * replaces the current references to the `Dataset` methods in the Image Processing doc with their fully qualified counterparts (to align it with the Audio Processing doc) cc @stevhliu
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3980/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3980/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/165
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/165/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/165/comments
https://api.github.com/repos/huggingface/datasets/issues/165/events
https://github.com/huggingface/datasets/issues/165
620,758,221
MDU6SXNzdWU2MjA3NTgyMjE=
165
ANLI
{ "avatar_url": "https://avatars.githubusercontent.com/u/6024930?v=4", "events_url": "https://api.github.com/users/douwekiela/events{/privacy}", "followers_url": "https://api.github.com/users/douwekiela/followers", "following_url": "https://api.github.com/users/douwekiela/following{/other_user}", "gists_url": "https://api.github.com/users/douwekiela/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/douwekiela", "id": 6024930, "login": "douwekiela", "node_id": "MDQ6VXNlcjYwMjQ5MzA=", "organizations_url": "https://api.github.com/users/douwekiela/orgs", "received_events_url": "https://api.github.com/users/douwekiela/received_events", "repos_url": "https://api.github.com/users/douwekiela/repos", "site_admin": false, "starred_url": "https://api.github.com/users/douwekiela/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/douwekiela/subscriptions", "type": "User", "url": "https://api.github.com/users/douwekiela" }
[]
closed
false
null
[]
null
[]
"2020-05-19T07:50:57Z"
"2020-05-20T12:23:07Z"
"2020-05-20T12:23:07Z"
NONE
null
null
null
Can I recommend the following: For ANLI, use https://github.com/facebookresearch/anli. As that paper says, "Our dataset is not to be confused with abductive NLI (Bhagavatula et al., 2019), which calls itself αNLI, or ART.". Indeed, the paper cited under what is currently called anli says in the abstract "We introduce a challenge dataset, ART". The current naming will confuse people :)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/165/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/165/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/584
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/584/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/584/comments
https://api.github.com/repos/huggingface/datasets/issues/584/events
https://github.com/huggingface/datasets/pull/584
695,186,652
MDExOlB1bGxSZXF1ZXN0NDgxNDY0NjEz
584
Use github versioning
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "I noticed that datasets like `cnn_dailymail` need the `version` parameter to be passed to its `config_kwargs`.\r\nShall we rename the `version` paramater in `load_dataset` ? Maybe `repo_version` or `script_version` ?" ]
"2020-09-07T14:58:15Z"
"2020-09-09T13:37:35Z"
"2020-09-09T13:37:34Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/584.diff", "html_url": "https://github.com/huggingface/datasets/pull/584", "merged_at": "2020-09-09T13:37:34Z", "patch_url": "https://github.com/huggingface/datasets/pull/584.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/584" }
Right now dataset scripts and metrics are downloaded from S3 which is in sync with master. It means that it's not currently possible to pin the dataset/metric script version. To fix that I changed the download URL from S3 to GitHub, and added a `version` parameter in `load_dataset` and `load_metric` to pin a certain version of the lib, as in #562
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/584/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/584/timeline
null
null
true
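An illustrative call under the naming discussed in the comment (`script_version` rather than `version`, since `version` collides with config kwargs of datasets like `cnn_dailymail`); the exact parameter name is an assumption here:

```python
# Illustrative only: pin dataset/metric processing scripts to a given repo tag.
from datasets import load_dataset, load_metric

ds = load_dataset("cnn_dailymail", "3.0.0", script_version="1.0.2")  # assumed kwarg name
metric = load_metric("rouge", script_version="1.0.2")
```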
https://api.github.com/repos/huggingface/datasets/issues/4961
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4961/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4961/comments
https://api.github.com/repos/huggingface/datasets/issues/4961/events
https://github.com/huggingface/datasets/issues/4961
1,368,124,033
I_kwDODunzps5Ri-qB
4,961
fsspec 2022.8.2 breaks xopen in streaming mode
{ "avatar_url": "https://avatars.githubusercontent.com/u/3616964?v=4", "events_url": "https://api.github.com/users/DCNemesis/events{/privacy}", "followers_url": "https://api.github.com/users/DCNemesis/followers", "following_url": "https://api.github.com/users/DCNemesis/following{/other_user}", "gists_url": "https://api.github.com/users/DCNemesis/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/DCNemesis", "id": 3616964, "login": "DCNemesis", "node_id": "MDQ6VXNlcjM2MTY5NjQ=", "organizations_url": "https://api.github.com/users/DCNemesis/orgs", "received_events_url": "https://api.github.com/users/DCNemesis/received_events", "repos_url": "https://api.github.com/users/DCNemesis/repos", "site_admin": false, "starred_url": "https://api.github.com/users/DCNemesis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DCNemesis/subscriptions", "type": "User", "url": "https://api.github.com/users/DCNemesis" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "loading `fsspec==2022.7.1` fixes this issue, setup.py would need to be changed to prevent users from using the latest version of fsspec.", "Opened [PR](https://github.com/huggingface/datasets/pull/4962) to address this.", "Hi @DCNemesis, thanks for reporting.\r\n\r\nThat was a temporary issue in `fsspec` releases 2022.8.0 and 2022.8.1. But they fixed it in their patch release 2022.8.2 (and yanked both previous versions). See:\r\n- https://github.com/huggingface/transformers/pull/18846\r\n\r\nAre you sure you have version 2022.8.2 installed?\r\n```shell\r\npip install -U fsspec\r\n```\r\n", "@albertvillanova I was using a temporary Google Colab instance, but checking it again today it seems it was loading 2022.8.1 rather than 2022.8.2. It's surprising that colab is using the version that was replaced the same day it was released. Testing with 2022.8.2 did work. It appears Colab [will be fixing it](https://github.com/googlecolab/colabtools/issues/3055) on their end too. ", "Thanks for the additional information.\r\n\r\nOnce we know 2022.8.2 works, I'm closing this issue. Feel free to reopen it if necessary.", "Colab just upgraded their default `fsspec` version to 2022.8.2:\r\n- https://github.com/googlecolab/colabtools/issues/3055#issuecomment-1244019010" ]
"2022-09-09T17:26:55Z"
"2022-09-12T17:45:50Z"
"2022-09-12T14:32:05Z"
NONE
null
null
null
## Describe the bug When fsspec 2022.8.2 is installed in your environment, xopen will prematurely close files, making streaming mode inoperable. ## Steps to reproduce the bug ```python import datasets data = datasets.load_dataset('MLCommons/ml_spoken_words', 'id_wav', split='train', streaming=True) ``` ## Expected results Dataset should load as iterator. ## Actual results ``` [/usr/local/lib/python3.7/dist-packages/datasets/load.py](https://localhost:8080/#) in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs) 1737 # Return iterable dataset in case of streaming 1738 if streaming: -> 1739 return builder_instance.as_streaming_dataset(split=split) 1740 1741 # Some datasets are already processed on the HF google storage [/usr/local/lib/python3.7/dist-packages/datasets/builder.py](https://localhost:8080/#) in as_streaming_dataset(self, split, base_path) 1023 ) 1024 self._check_manual_download(dl_manager) -> 1025 splits_generators = {sg.name: sg for sg in self._split_generators(dl_manager)} 1026 # By default, return all splits 1027 if split is None: [~/.cache/huggingface/modules/datasets_modules/datasets/MLCommons--ml_spoken_words/321ea853cf0a05abb7a2d7efea900692a3d8622af65a2f3ce98adb7800a5d57b/ml_spoken_words.py](https://localhost:8080/#) in _split_generators(self, dl_manager) 182 name=datasets.Split.TRAIN, 183 gen_kwargs={ --> 184 "audio_archives": [download_audio(split="train", lang=lang) for lang in self.config.languages], 185 "local_audio_archives_paths": [download_extract_audio(split="train", lang=lang) for lang in 186 self.config.languages] if not dl_manager.is_streaming else None, [~/.cache/huggingface/modules/datasets_modules/datasets/MLCommons--ml_spoken_words/321ea853cf0a05abb7a2d7efea900692a3d8622af65a2f3ce98adb7800a5d57b/ml_spoken_words.py](https://localhost:8080/#) in <listcomp>(.0) 182 name=datasets.Split.TRAIN, 183 gen_kwargs={ --> 184 "audio_archives": [download_audio(split="train", lang=lang) for lang in self.config.languages], 185 "local_audio_archives_paths": [download_extract_audio(split="train", lang=lang) for lang in 186 self.config.languages] if not dl_manager.is_streaming else None, [~/.cache/huggingface/modules/datasets_modules/datasets/MLCommons--ml_spoken_words/321ea853cf0a05abb7a2d7efea900692a3d8622af65a2f3ce98adb7800a5d57b/ml_spoken_words.py](https://localhost:8080/#) in _download_audio_archives(dl_manager, lang, format, split) 267 # for streaming case 268 def _download_audio_archives(dl_manager, lang, format, split): --> 269 archives_paths = _download_audio_archives_paths(dl_manager, lang, format, split) 270 return [dl_manager.iter_archive(archive_path) for archive_path in archives_paths] [~/.cache/huggingface/modules/datasets_modules/datasets/MLCommons--ml_spoken_words/321ea853cf0a05abb7a2d7efea900692a3d8622af65a2f3ce98adb7800a5d57b/ml_spoken_words.py](https://localhost:8080/#) in _download_audio_archives_paths(dl_manager, lang, format, split) 251 n_files_path = dl_manager.download(n_files_url) 252 --> 253 with open(n_files_path, "r", encoding="utf-8") as file: 254 n_files = int(file.read().strip()) # the file contains a number of archives 255 ValueError: I/O operation on closed file. ``` ## Environment info - `datasets` version: 2.4.0 - Platform: Linux-5.10.133+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.13 - PyArrow version: 6.0.1 - Pandas version: 1.3.5
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4961/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4961/timeline
null
completed
false
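The comments place the blame on the yanked fsspec releases rather than on `datasets` itself; a minimal environment check along those lines (assuming the `packaging` package is available) could look like this:

```python
# Sanity check based on the comments above: fsspec 2022.8.0 / 2022.8.1 (both
# yanked) break streaming, 2022.8.2 and later do not.
import fsspec
from packaging import version

bad = {version.parse("2022.8.0"), version.parse("2022.8.1")}
if version.parse(fsspec.__version__) in bad:
    raise RuntimeError(
        "This fsspec release breaks datasets streaming; "
        "run `pip install -U fsspec` to get 2022.8.2 or later."
    )
```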
https://api.github.com/repos/huggingface/datasets/issues/1133
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1133/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1133/comments
https://api.github.com/repos/huggingface/datasets/issues/1133/events
https://github.com/huggingface/datasets/pull/1133
757,307,660
MDExOlB1bGxSZXF1ZXN0NTMyNzA1ODQ4
1,133
Adding XQUAD-R Dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/6687858?v=4", "events_url": "https://api.github.com/users/manandey/events{/privacy}", "followers_url": "https://api.github.com/users/manandey/followers", "following_url": "https://api.github.com/users/manandey/following{/other_user}", "gists_url": "https://api.github.com/users/manandey/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/manandey", "id": 6687858, "login": "manandey", "node_id": "MDQ6VXNlcjY2ODc4NTg=", "organizations_url": "https://api.github.com/users/manandey/orgs", "received_events_url": "https://api.github.com/users/manandey/received_events", "repos_url": "https://api.github.com/users/manandey/repos", "site_admin": false, "starred_url": "https://api.github.com/users/manandey/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/manandey/subscriptions", "type": "User", "url": "https://api.github.com/users/manandey" }
[]
closed
false
null
[]
null
[]
"2020-12-04T18:22:29Z"
"2020-12-04T18:28:54Z"
"2020-12-04T18:28:49Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1133.diff", "html_url": "https://github.com/huggingface/datasets/pull/1133", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/1133.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1133" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1133/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1133/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/308
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/308/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/308/comments
https://api.github.com/repos/huggingface/datasets/issues/308/events
https://github.com/huggingface/datasets/pull/308
644,195,251
MDExOlB1bGxSZXF1ZXN0NDM4ODYyMzYy
308
Specify utf-8 encoding for MRPC files
{ "avatar_url": "https://avatars.githubusercontent.com/u/15801338?v=4", "events_url": "https://api.github.com/users/patpizio/events{/privacy}", "followers_url": "https://api.github.com/users/patpizio/followers", "following_url": "https://api.github.com/users/patpizio/following{/other_user}", "gists_url": "https://api.github.com/users/patpizio/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patpizio", "id": 15801338, "login": "patpizio", "node_id": "MDQ6VXNlcjE1ODAxMzM4", "organizations_url": "https://api.github.com/users/patpizio/orgs", "received_events_url": "https://api.github.com/users/patpizio/received_events", "repos_url": "https://api.github.com/users/patpizio/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patpizio/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patpizio/subscriptions", "type": "User", "url": "https://api.github.com/users/patpizio" }
[]
closed
false
null
[]
null
[]
"2020-06-23T22:44:36Z"
"2020-06-25T12:52:21Z"
"2020-06-25T12:16:10Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/308.diff", "html_url": "https://github.com/huggingface/datasets/pull/308", "merged_at": "2020-06-25T12:16:09Z", "patch_url": "https://github.com/huggingface/datasets/pull/308.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/308" }
Fixes #307, again probably a Windows-related issue.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/308/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/308/timeline
null
null
true
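The change implied by the title is simply an explicit encoding when the GLUE loader reads the MRPC text files; a minimal sketch (file name shown for illustration only):

```python
# Open MRPC files with an explicit encoding so parsing does not depend on the
# platform default codepage (the Windows-related issue referenced above).
with open("msr_paraphrase_train.txt", encoding="utf-8") as f:  # illustrative path
    next(f)  # skip the header row
    for line in f:
        label, id1, id2, sentence1, sentence2 = line.rstrip("\n").split("\t")
```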
https://api.github.com/repos/huggingface/datasets/issues/5783
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5783/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5783/comments
https://api.github.com/repos/huggingface/datasets/issues/5783/events
https://github.com/huggingface/datasets/issues/5783
1,679,664,393
I_kwDODunzps5kHaUJ
5,783
Offset overflow while doing regex on a text column
{ "avatar_url": "https://avatars.githubusercontent.com/u/5066268?v=4", "events_url": "https://api.github.com/users/nishanthcgit/events{/privacy}", "followers_url": "https://api.github.com/users/nishanthcgit/followers", "following_url": "https://api.github.com/users/nishanthcgit/following{/other_user}", "gists_url": "https://api.github.com/users/nishanthcgit/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/nishanthcgit", "id": 5066268, "login": "nishanthcgit", "node_id": "MDQ6VXNlcjUwNjYyNjg=", "organizations_url": "https://api.github.com/users/nishanthcgit/orgs", "received_events_url": "https://api.github.com/users/nishanthcgit/received_events", "repos_url": "https://api.github.com/users/nishanthcgit/repos", "site_admin": false, "starred_url": "https://api.github.com/users/nishanthcgit/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nishanthcgit/subscriptions", "type": "User", "url": "https://api.github.com/users/nishanthcgit" }
[]
open
false
null
[]
null
[ "Hi! This looks like an Arrow bug, but it can be avoided by reducing the `writer_batch_size`.\r\n\r\n(`ds = ds.map(get_text_caption, writer_batch_size=100)` in Colab runs without issues)\r\n", "@mariosasko I ran into this problem with load_dataset. What should I do", "@AisingioroHao0 You can also pass the `writer_batch_size` parameter to `load_dataset`, e.g., `load_dataset(\"mnist\", writer_batch_size=100)`", "@mariosasko How do I determine the optimal size of write_batch_size? My training is sometimes fast and sometimes slow. Is it because write_batch_size is too small? Each batch of the current dataloader should be the same size. I preprocessed the dataset using map", "@aihao2000 It's unlikely `writer_batch_size` is the problem. You can use the following code to profile the training loop (e.g., on a subset of data) and find slow parts:\r\n```python\r\nimport cProfile, pstats\r\n\r\nwith cProfile.Profile() as profiler:\r\n ... # training loop code\r\n\r\nstats = pstats.Stats(profiler).sort_stats(\"cumtime\")\r\nstats.print_stats()\r\n```\r\n", "@nishanthcgit ok,thanks.Recently I found dataset.with_transform to be faster and more stable with multiple processes", "@mariosasko Is the larger the num_proc of load_dataset within the number of cpu cores, the better? Then the num_proc of data_loader is the number of cpu cores/number of training processes" ]
"2023-04-22T19:12:03Z"
"2023-09-22T06:44:07Z"
null
NONE
null
null
null
### Describe the bug `ArrowInvalid: offset overflow while concatenating arrays` Same error as [here](https://github.com/huggingface/datasets/issues/615) ### Steps to reproduce the bug Steps to reproduce: (dataset is a few GB big so try in colab maybe) ``` import datasets import re ds = datasets.load_dataset('nishanthc/dnd_map_dataset_v0.1', split = 'train') def get_text_caption(example): regex_pattern = r'\s\d+x\d+|,\sLQ|,\sgrid|\.\w+$' example['text_caption'] = re.sub(regex_pattern, '', example['picture_text']) return example ds = ds.map(get_text_caption) ``` I am trying to apply a regex to remove certain patterns from a text column. Not sure why this error is showing up. ### Expected behavior Dataset should have a new column with processed text ### Environment info Datasets version - 2.11.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5783/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5783/timeline
null
null
false
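A sketch of the workaround suggested in the comments: keep the same `map` call from the reproduction above and only lower `writer_batch_size`, so the Arrow writer flushes smaller record batches.

```python
import re

import datasets

ds = datasets.load_dataset("nishanthc/dnd_map_dataset_v0.1", split="train")

def get_text_caption(example):
    regex_pattern = r"\s\d+x\d+|,\sLQ|,\sgrid|\.\w+$"
    example["text_caption"] = re.sub(regex_pattern, "", example["picture_text"])
    return example

# Smaller batches avoid the offset overflow in a single large Arrow string array.
ds = ds.map(get_text_caption, writer_batch_size=100)
```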
https://api.github.com/repos/huggingface/datasets/issues/3228
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3228/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3228/comments
https://api.github.com/repos/huggingface/datasets/issues/3228/events
https://github.com/huggingface/datasets/pull/3228
1,046,702,143
PR_kwDODunzps4uMJ58
3,228
Add CITATION file
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[]
"2021-11-07T09:40:19Z"
"2021-11-07T09:51:47Z"
"2021-11-07T09:51:46Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3228.diff", "html_url": "https://github.com/huggingface/datasets/pull/3228", "merged_at": "2021-11-07T09:51:46Z", "patch_url": "https://github.com/huggingface/datasets/pull/3228.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3228" }
Add CITATION file.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/3228/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3228/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1278
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1278/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1278/comments
https://api.github.com/repos/huggingface/datasets/issues/1278/events
https://github.com/huggingface/datasets/pull/1278
758,988,465
MDExOlB1bGxSZXF1ZXN0NTM0MDYwNDY5
1,278
Craigslist bargains
{ "avatar_url": "https://avatars.githubusercontent.com/u/7950786?v=4", "events_url": "https://api.github.com/users/ZacharySBrown/events{/privacy}", "followers_url": "https://api.github.com/users/ZacharySBrown/followers", "following_url": "https://api.github.com/users/ZacharySBrown/following{/other_user}", "gists_url": "https://api.github.com/users/ZacharySBrown/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ZacharySBrown", "id": 7950786, "login": "ZacharySBrown", "node_id": "MDQ6VXNlcjc5NTA3ODY=", "organizations_url": "https://api.github.com/users/ZacharySBrown/orgs", "received_events_url": "https://api.github.com/users/ZacharySBrown/received_events", "repos_url": "https://api.github.com/users/ZacharySBrown/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ZacharySBrown/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ZacharySBrown/subscriptions", "type": "User", "url": "https://api.github.com/users/ZacharySBrown" }
[]
closed
false
null
[]
null
[ "Seeing this in the CircleCI builds, this is what I was originally getting before I started messing around with the download URLS to try to fix this:\r\n\r\n`FileNotFoundError: [Errno 2] No such file or directory: '/tmp/tmpwvji917g/extracted/d6185140afb24ad8fee67392100a478269cba286b0d88915a137fdf88872de14/dummy_data/train__VARIABLE_MISUSE__SStuB.txt-00001-of-00300'`\r\n\r\nCould this be because of the files in my `dummy_data.zip`? I had to manually create it, and it looked like the test was looking for the following files, so I created the `.zip` with this structure:\r\n\r\n```\r\nArchive: dummy_data.zip\r\n creating: dummy_data/\r\n inflating: dummy_data/blobtest \r\n inflating: dummy_data/parsed.jsontrain \r\n inflating: dummy_data/parsed.jsonvalidation \r\n```", "Going to close this out and link to a new (cleaner) PR" ]
"2020-12-08T01:45:55Z"
"2020-12-09T00:46:15Z"
"2020-12-09T00:46:15Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1278.diff", "html_url": "https://github.com/huggingface/datasets/pull/1278", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/1278.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1278" }
`craigslist_bargains` dataset from [here](https://worksheets.codalab.org/worksheets/0x453913e76b65495d8b9730d41c7e0a0c/)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1278/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1278/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3053
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3053/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3053/comments
https://api.github.com/repos/huggingface/datasets/issues/3053/events
https://github.com/huggingface/datasets/issues/3053
1,022,076,905
I_kwDODunzps4866fp
3,053
load_dataset('the_pile_openwebtext2') produces ArrowInvalid, value too large to fit in C integer type
{ "avatar_url": "https://avatars.githubusercontent.com/u/3458792?v=4", "events_url": "https://api.github.com/users/davidbau/events{/privacy}", "followers_url": "https://api.github.com/users/davidbau/followers", "following_url": "https://api.github.com/users/davidbau/following{/other_user}", "gists_url": "https://api.github.com/users/davidbau/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/davidbau", "id": 3458792, "login": "davidbau", "node_id": "MDQ6VXNlcjM0NTg3OTI=", "organizations_url": "https://api.github.com/users/davidbau/orgs", "received_events_url": "https://api.github.com/users/davidbau/received_events", "repos_url": "https://api.github.com/users/davidbau/repos", "site_admin": false, "starred_url": "https://api.github.com/users/davidbau/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/davidbau/subscriptions", "type": "User", "url": "https://api.github.com/users/davidbau" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "I encountered the same bug using different datasets.\r\nany suggestions?", "+1, can reproduce here!", "I get the same error\r\nPlatform: Windows 10\r\nPython: python 3.8.8\r\nPyArrow: 5.0", "I was getting a similar error `pyarrow.lib.ArrowInvalid: Integer value 528 not in range: -128 to 127` - AFAICT, this is because the type specified for `reddit_scores` is `datasets.Sequence(datasets.Value(\"int8\"))`, but the actual values can be well outside the max range for 8-bit integers.\r\n\r\nI worked around this by downloading the `the_pile_openwebtext2.py` and editing it to use local files and drop reddit scores as a column (not needed for my purposes).", "Addressed in https://huggingface.co/datasets/the_pile_openwebtext2/discussions/4" ]
"2021-10-10T19:55:21Z"
"2023-02-24T14:02:20Z"
"2023-02-24T14:02:20Z"
NONE
null
null
null
## Describe the bug When loading `the_pile_openwebtext2`, we get the error `pyarrow.lib.ArrowInvalid: Value 2111 too large to fit in C integer type` ## Steps to reproduce the bug ```python import datasets ds = datasets.load_dataset('the_pile_openwebtext2') ``` ## Expected results Should download the dataset, convert it to an arrow file, and return a working Dataset object. ## Actual results The download works, but conversion to the arrow file fails as follows: ``` >>> ds = datasets.load_dataset('the_pile_openwebtext2') Downloading and preparing dataset openwebtext2/plain_text (download: 27.33 GiB, generated: 63.86 GiB , post-processed: Unknown size, total: 91.19 GiB) to /home/davidbau/.cache/huggingface/datasets/open webtext2/plain_text/1.0.0/c48ec73ba3483bac673463f48f67e9a4fd8cb49a9d6ec4fb957f0b424b97cf25... Traceback (most recent call last): File "/home/davidbau/.conda/envs/tenv/lib/python3.9/site-packages/datasets/builder.py", line 1133, in _prepare_split writer.write(example, key) File "/home/davidbau/.conda/envs/tenv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 366, in write self.write_examples_on_file() File "/home/davidbau/.conda/envs/tenv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 311, in write_examples_on_file pa_array = pa.array(typed_sequence) File "pyarrow/array.pxi", line 222, in pyarrow.lib.array File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol File "/home/davidbau/.conda/envs/tenv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 115, in __arrow_array__ out = pa.array(cast_to_python_objects(self.data, only_1d_for_numpy=True), type=type) File "pyarrow/array.pxi", line 305, in pyarrow.lib.array File "pyarrow/array.pxi", line 39, in pyarrow.lib._sequence_to_array File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: Value 2111 too large to fit in C integer type ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: ``` - Platform: Ubuntu 20.04 - Python version: python 3.9 - PyArrow version: 3.0.0
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/3053/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3053/timeline
null
completed
false
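One comment traces the overflow to `reddit_scores` being declared as `datasets.Sequence(datasets.Value("int8"))` while the actual values exceed the int8 range; a hypothetical one-line widening of that feature in a local copy of the loading script would be:

```python
import datasets

# was: datasets.Sequence(datasets.Value("int8")) -- reddit scores can exceed int8
reddit_scores = datasets.Sequence(datasets.Value("int32"))
```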
https://api.github.com/repos/huggingface/datasets/issues/2617
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2617/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2617/comments
https://api.github.com/repos/huggingface/datasets/issues/2617/events
https://github.com/huggingface/datasets/pull/2617
940,846,847
MDExOlB1bGxSZXF1ZXN0Njg2ODU3NzQz
2,617
Fix missing EOL issue in to_json for old versions of pandas
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
{ "closed_at": "2021-07-21T15:36:49Z", "closed_issues": 29, "created_at": "2021-06-08T18:48:33Z", "creator": { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }, "description": "Next minor release", "due_on": "2021-08-05T07:00:00Z", "html_url": "https://github.com/huggingface/datasets/milestone/6", "id": 6836458, "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/6/labels", "node_id": "MDk6TWlsZXN0b25lNjgzNjQ1OA==", "number": 6, "open_issues": 0, "state": "closed", "title": "1.10", "updated_at": "2021-07-21T15:36:49Z", "url": "https://api.github.com/repos/huggingface/datasets/milestones/6" }
[]
"2021-07-09T15:05:45Z"
"2021-07-12T14:09:00Z"
"2021-07-09T15:28:33Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2617.diff", "html_url": "https://github.com/huggingface/datasets/pull/2617", "merged_at": "2021-07-09T15:28:33Z", "patch_url": "https://github.com/huggingface/datasets/pull/2617.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2617" }
Some versions of pandas don't add an EOL at the end of the output of `to_json`. Therefore users could end up having two samples on the same line. Close https://github.com/huggingface/datasets/issues/2615
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2617/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2617/timeline
null
null
true
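A hedged illustration of the failure mode being fixed (not the actual patch): on an affected pandas version the JSON Lines output has no trailing newline, so a guard like the following is needed before writing successive batches.

```python
import pandas as pd

def to_json_lines(df: pd.DataFrame) -> str:
    out = df.to_json(orient="records", lines=True)
    if not out.endswith("\n"):  # some pandas versions omit the final EOL
        out += "\n"
    return out
```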
https://api.github.com/repos/huggingface/datasets/issues/2654
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2654/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2654/comments
https://api.github.com/repos/huggingface/datasets/issues/2654/events
https://github.com/huggingface/datasets/issues/2654
945,167,231
MDU6SXNzdWU5NDUxNjcyMzE=
2,654
Give a user feedback if the dataset he loads is streamable or not
{ "avatar_url": "https://avatars.githubusercontent.com/u/32632186?v=4", "events_url": "https://api.github.com/users/philschmid/events{/privacy}", "followers_url": "https://api.github.com/users/philschmid/followers", "following_url": "https://api.github.com/users/philschmid/following{/other_user}", "gists_url": "https://api.github.com/users/philschmid/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/philschmid", "id": 32632186, "login": "philschmid", "node_id": "MDQ6VXNlcjMyNjMyMTg2", "organizations_url": "https://api.github.com/users/philschmid/orgs", "received_events_url": "https://api.github.com/users/philschmid/received_events", "repos_url": "https://api.github.com/users/philschmid/repos", "site_admin": false, "starred_url": "https://api.github.com/users/philschmid/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/philschmid/subscriptions", "type": "User", "url": "https://api.github.com/users/philschmid" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" } ]
null
[ "#self-assign", "I understand it already raises a `NotImplementedError` exception, eg:\r\n\r\n```\r\n>>> dataset = load_dataset(\"journalists_questions\", name=\"plain_text\", split=\"train\", streaming=True)\r\n\r\n[...]\r\nNotImplementedError: Extraction protocol for file at https://drive.google.com/uc?export=download&id=1CBrh-9OrSpKmPQBxTK_ji6mq6WTN_U9U is not implemented yet\r\n```\r\n" ]
"2021-07-15T09:07:27Z"
"2021-08-02T11:03:21Z"
null
MEMBER
null
null
null
**Is your feature request related to a problem? Please describe.** I would love to know if a `dataset` is streamable or not with the current implementation. **Describe the solution you'd like** We could show a warning when a dataset is loaded with `load_dataset('...', streaming=True)` but is not streamable, e.g. if it is an archive. **Describe alternatives you've considered** Add a new metadata tag for "streaming"
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2654/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2654/timeline
null
null
false
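As the comments note, the current behaviour is already a `NotImplementedError` raised at load time; a rough sketch of how a user can surface that today, using the example from the comment:

```python
from datasets import load_dataset

try:
    ds = load_dataset(
        "journalists_questions", name="plain_text", split="train", streaming=True
    )
except NotImplementedError as err:
    print(f"Not streamable with the current implementation: {err}")
```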
https://api.github.com/repos/huggingface/datasets/issues/1780
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1780/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1780/comments
https://api.github.com/repos/huggingface/datasets/issues/1780/events
https://github.com/huggingface/datasets/pull/1780
793,882,132
MDExOlB1bGxSZXF1ZXN0NTYxNDkxNTgy
1,780
Update SciFact URL
{ "avatar_url": "https://avatars.githubusercontent.com/u/3091916?v=4", "events_url": "https://api.github.com/users/dwadden/events{/privacy}", "followers_url": "https://api.github.com/users/dwadden/followers", "following_url": "https://api.github.com/users/dwadden/following{/other_user}", "gists_url": "https://api.github.com/users/dwadden/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dwadden", "id": 3091916, "login": "dwadden", "node_id": "MDQ6VXNlcjMwOTE5MTY=", "organizations_url": "https://api.github.com/users/dwadden/orgs", "received_events_url": "https://api.github.com/users/dwadden/received_events", "repos_url": "https://api.github.com/users/dwadden/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dwadden/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dwadden/subscriptions", "type": "User", "url": "https://api.github.com/users/dwadden" }
[]
closed
false
null
[]
null
[ "Hi ! The error you get is the result of some verifications the library is doing when loading a dataset that already has some metadata in the dataset_infos.json. You can ignore the verifications with \r\n```\r\npython datasets-cli test datasets/scifact --save_infos --all_configs --ignore_verifications\r\n```\r\nThis will update the dataset_infos.json :)", "Nice, I ran that command and `dataset_infos` seems to have been updated appropriately; I added this to the PR. But when I try to load the dataset it still seems like it's getting a path to the old URL somehow. I `pip install -e`'d my fork of the repo, so I'm not sure why `load_dataset` is still looking for the old version of the file. Stack trace below.\r\n\r\n```\r\nIn [1]: import datasets\r\n\r\nIn [2]: ds = datasets.load_dataset(\"scifact\", \"claims\")\r\nDownloading: 7.34kB [00:00, 2.58MB/s]\r\nDownloading: 3.38kB [00:00, 1.36MB/s]\r\nDownloading and preparing dataset scifact/claims (download: 2.72 MiB, generated: 258.64 KiB, post-processed: Unknown size, total: 2.97 MiB) to /Users/dwadden/.cache/huggingface/datasets/scifact/claims/1.0.0/2bb675b2003716a061a4d8ce27fab32ab7f6d010016bab08ffaccea3c14ec6e7...\r\n---------------------------------------------------------------------------\r\nConnectionError Traceback (most recent call last)\r\n<ipython-input-2-9a50b954d89a> in <module>\r\n----> 1 ds = datasets.load_dataset(\"scifact\", \"claims\")\r\n\r\n~/proj/datasets/src/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs)\r\n 672\r\n 673 # Download and prepare data\r\n--> 674 builder_instance.download_and_prepare(\r\n 675 download_config=download_config,\r\n 676 download_mode=download_mode,\r\n\r\n~/proj/datasets/src/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)\r\n 560 logger.warning(\"HF google storage unreachable. 
Downloading and preparing it from source\")\r\n 561 if not downloaded_from_gcs:\r\n--> 562 self._download_and_prepare(\r\n 563 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n 564 )\r\n\r\n~/proj/datasets/src/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 616 split_dict = SplitDict(dataset_name=self.name)\r\n 617 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)\r\n--> 618 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n 619\r\n 620 # Checksums verification\r\n\r\n~/.cache/huggingface/modules/datasets_modules/datasets/scifact/2bb675b2003716a061a4d8ce27fab32ab7f6d010016bab08ffaccea3c14ec6e7/scifact.py in _split_generators(self, dl_manager)\r\n 92 # dl_manager is a datasets.download.DownloadManager that can be used to\r\n 93 # download and extract URLs\r\n---> 94 dl_dir = dl_manager.download_and_extract(_URL)\r\n 95\r\n 96 if self.config.name == \"corpus\":\r\n\r\n~/proj/datasets/src/datasets/utils/download_manager.py in download_and_extract(self, url_or_urls)\r\n 256 extracted_path(s): `str`, extracted paths of given URL(s).\r\n 257 \"\"\"\r\n--> 258 return self.extract(self.download(url_or_urls))\r\n 259\r\n 260 def get_recorded_sizes_checksums(self):\r\n\r\n~/proj/datasets/src/datasets/utils/download_manager.py in download(self, url_or_urls)\r\n 177\r\n 178 start_time = datetime.now()\r\n--> 179 downloaded_path_or_paths = map_nested(\r\n 180 download_func,\r\n 181 url_or_urls,\r\n\r\n~/proj/datasets/src/datasets/utils/py_utils.py in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, types)\r\n 223 # Singleton\r\n 224 if not isinstance(data_struct, dict) and not isinstance(data_struct, types):\r\n--> 225 return function(data_struct)\r\n 226\r\n 227 disable_tqdm = bool(logger.getEffectiveLevel() > INFO)\r\n\r\n~/proj/datasets/src/datasets/utils/file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs)\r\n 348 if is_remote_url(url_or_filename):\r\n 349 # URL, so get it from the cache (downloading if necessary)\r\n--> 350 output_path = get_from_cache(\r\n 351 url_or_filename,\r\n 352 cache_dir=cache_dir,\r\n\r\n~/proj/datasets/src/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries)\r\n 631 elif response is not None and response.status_code == 404:\r\n 632 raise FileNotFoundError(\"Couldn't find file at {}\".format(url))\r\n--> 633 raise ConnectionError(\"Couldn't reach {}\".format(url))\r\n 634\r\n 635 # Try a second time\r\n\r\nConnectionError: Couldn't reach https://ai2-s2-scifact.s3-us-west-2.amazonaws.com/release/2020-05-01/data.tar.gz\r\n```", "Hi ! This may be because you need to point `load_dataset` to the path of the dataset script that has the updated url:\r\n```python\r\nload_dataset(\"./datasets/scifact\", \"claims\")\r\n```\r\n\r\nIf you don't use a path to the updated script, then the old one is used by deffault", "Nice, I did\r\n```\r\nload_dataset(\"./datasets/scifact\", \"claims\")\r\n```\r\nand it worked. ", "One more question about the way the code is being preprocessed. 
The way I've formatted the data, each entry is a claim, which may be associated with multiple evidence documents (similar to FEVER):\r\n```\r\n# My way\r\n{'id': 70,\r\n 'claim': 'Activation of PPM1D suppresses p53 function.',\r\n 'evidence': {'5956380': [{'sentences': [5, 6], 'label': 'SUPPORT'}],\r\n '4414547': [{'sentences': [5], 'label': 'SUPPORT'}]},\r\n 'cited_doc_ids': [5956380, 4414547]}\r\n```\r\n\r\nIn the Hugginface data, each entry is a single claim / evidence document pair. So, the above entry is converted into two separate entries, like so:\r\n```\r\n# huggingface\r\n[{'cited_doc_ids': [5956380, 4414547],\r\n 'claim': 'Activation of PPM1D suppresses p53 function.',\r\n 'evidence_doc_id': '5956380',\r\n 'evidence_label': 'SUPPORT',\r\n 'evidence_sentences': [5, 6],\r\n 'id': 70},\r\n {'cited_doc_ids': [5956380, 4414547],\r\n 'claim': 'Activation of PPM1D suppresses p53 function.',\r\n 'evidence_doc_id': '4414547',\r\n 'evidence_label': 'SUPPORT',\r\n 'evidence_sentences': [5],\r\n 'id': 70}]\r\n```\r\n\r\nWas this done by design? If not, would you mind if I modify the Huggingface code so that it more closely matches the format that people will get if they download the data from the SciFact repo?", "Yes if you think the format is not convenient for training or evaluation we can change it.\r\nAlso I think we're doing something similar for FEVER: one example = one (claim, sentence) pair.\r\n\r\nLet's merge this PR first and then feel free to open a new PR to change the format :) ", "Thanks for merging!\r\n\r\nI don't have super-strong feelings one way or the other in terms of the data, I think it's probably fine. I may revisit later." ]
"2021-01-26T02:49:06Z"
"2021-01-28T18:48:00Z"
"2021-01-28T10:19:45Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1780.diff", "html_url": "https://github.com/huggingface/datasets/pull/1780", "merged_at": "2021-01-28T10:19:45Z", "patch_url": "https://github.com/huggingface/datasets/pull/1780.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1780" }
Hi, I'm following up this [issue](https://github.com/huggingface/datasets/issues/1717). I'm the SciFact dataset creator, and I'm trying to update the SciFact data url in your repo. Thanks again for adding the dataset! Basically, I'd just like to change the `_URL` to `"https://scifact.s3-us-west-2.amazonaws.com/release/latest/data.tar.gz"`. I changed `scifact.py` appropriately and tried running ``` python datasets-cli test datasets/scifact --save_infos --all_configs ``` which I was hoping would update the `dataset_infos.json` for SciFact. But for some reason the code still seems to be looking for the old version of the dataset. Full stack trace below. I've tried to clear all my Huggingface-related caches, and I've `git grep`'d to make sure that the old path to the dataset isn't floating around somewhere. So I'm not sure why this is happening? Can you help me switch the download URL? ``` (datasets) $ python datasets-cli test datasets/scifact --save_infos --all_configs Checking datasets/scifact/scifact.py for additional imports. Found main folder for dataset datasets/scifact/scifact.py at /Users/dwadden/.cache/huggingface/modules/datasets_modules/datasets/scifact Found specific version folder for dataset datasets/scifact/scifact.py at /Users/dwadden/.cache/huggingface/modules/datasets_modules/datasets/scifact/2b43b4e125ce3369da7d6353961d9d315e6593f24cc7bbe9ede5e5c911d11534 Found script file from datasets/scifact/scifact.py to /Users/dwadden/.cache/huggingface/modules/datasets_modules/datasets/scifact/2b43b4e125ce3369da7d6353961d9d315e6593f24cc7bbe9ede5e5c911d11534/scifact.py Found dataset infos file from datasets/scifact/dataset_infos.json to /Users/dwadden/.cache/huggingface/modules/datasets_modules/datasets/scifact/2b43b4e125ce3369da7d6353961d9d315e6593f24cc7bbe9ede5e5c911d11534/dataset_infos.json Found metadata file for dataset datasets/scifact/scifact.py at /Users/dwadden/.cache/huggingface/modules/datasets_modules/datasets/scifact/2b43b4e125ce3369da7d6353961d9d315e6593f24cc7bbe9ede5e5c911d11534/scifact.json Loading Dataset Infos from /Users/dwadden/.cache/huggingface/modules/datasets_modules/datasets/scifact/2b43b4e125ce3369da7d6353961d9d315e6593f24cc7bbe9ede5e5c911d11534 Testing builder 'corpus' (1/2) Generating dataset scifact (/Users/dwadden/.cache/huggingface/datasets/scifact/corpus/1.0.0/2b43b4e125ce3369da7d6353961d9d315e6593f24cc7bbe9ede5e5c911d11534) Downloading and preparing dataset scifact/corpus (download: 2.72 MiB, generated: 7.63 MiB, post-processed: Unknown size, total: 10.35 MiB) to /Users/dwadden/.cache/huggingface/datasets/scifact/corpus/1.0.0/2b43b4e125ce3369da7d6353961d9d315e6593f24cc7bbe9ede5e5c911d11534... Downloading took 0.0 min Checksum Computation took 0.0 min Traceback (most recent call last): File "/Users/dwadden/proj/datasets/datasets-cli", line 36, in <module> service.run() File "/Users/dwadden/proj/datasets/src/datasets/commands/test.py", line 139, in run builder.download_and_prepare( File "/Users/dwadden/proj/datasets/src/datasets/builder.py", line 562, in download_and_prepare self._download_and_prepare( File "/Users/dwadden/proj/datasets/src/datasets/builder.py", line 622, in _download_and_prepare verify_checksums( File "/Users/dwadden/proj/datasets/src/datasets/utils/info_utils.py", line 32, in verify_checksums raise ExpectedMoreDownloadedFiles(str(set(expected_checksums) - set(recorded_checksums))) datasets.utils.info_utils.ExpectedMoreDownloadedFiles: {'https://ai2-s2-scifact.s3-us-west-2.amazonaws.com/release/2020-05-01/data.tar.gz'} ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1780/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1780/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1708
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1708/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1708/comments
https://api.github.com/repos/huggingface/datasets/issues/1708/events
https://github.com/huggingface/datasets/issues/1708
781,631,455
MDU6SXNzdWU3ODE2MzE0NTU=
1,708
<html dir="ltr" lang="en" class="focus-outline-visible"><head><meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
{ "avatar_url": "https://avatars.githubusercontent.com/u/77126849?v=4", "events_url": "https://api.github.com/users/Louiejay54/events{/privacy}", "followers_url": "https://api.github.com/users/Louiejay54/followers", "following_url": "https://api.github.com/users/Louiejay54/following{/other_user}", "gists_url": "https://api.github.com/users/Louiejay54/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Louiejay54", "id": 77126849, "login": "Louiejay54", "node_id": "MDQ6VXNlcjc3MTI2ODQ5", "organizations_url": "https://api.github.com/users/Louiejay54/orgs", "received_events_url": "https://api.github.com/users/Louiejay54/received_events", "repos_url": "https://api.github.com/users/Louiejay54/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Louiejay54/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Louiejay54/subscriptions", "type": "User", "url": "https://api.github.com/users/Louiejay54" }
[]
closed
false
null
[]
null
[]
"2021-01-07T21:45:24Z"
"2021-01-08T09:00:01Z"
"2021-01-08T09:00:01Z"
NONE
null
null
null
## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** *what are some good reasons to have this dataset* Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1708/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1708/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/35
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/35/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/35/comments
https://api.github.com/repos/huggingface/datasets/issues/35/events
https://github.com/huggingface/datasets/pull/35
611,413,731
MDExOlB1bGxSZXF1ZXN0NDEyNjAyMTc0
35
[Tests] fix typo
{ "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickvonplaten", "id": 23423619, "login": "patrickvonplaten", "node_id": "MDQ6VXNlcjIzNDIzNjE5", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickvonplaten" }
[]
closed
false
null
[]
null
[]
"2020-05-03T13:23:49Z"
"2020-05-03T13:24:21Z"
"2020-05-03T13:24:20Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/35.diff", "html_url": "https://github.com/huggingface/datasets/pull/35", "merged_at": "2020-05-03T13:24:20Z", "patch_url": "https://github.com/huggingface/datasets/pull/35.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/35" }
@lhoestq - currently the slow test fail with: ``` _____________________________________________________________________________________ DatasetTest.test_load_real_dataset_xnli _____________________________________________________________________________________ self = <tests.test_dataset_common.DatasetTest testMethod=test_load_real_dataset_xnli>, dataset_name = 'xnli' @slow def test_load_real_dataset(self, dataset_name): with tempfile.TemporaryDirectory() as temp_data_dir: > dataset = load(dataset_name, data_dir=temp_data_dir) tests/test_dataset_common.py:153: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../../python_bin/nlp/load.py:497: in load dbuilder.download_and_prepare(**download_and_prepare_kwargs) ../../python_bin/nlp/builder.py:383: in download_and_prepare self._download_and_prepare(dl_manager=dl_manager, download_config=download_config) ../../python_bin/nlp/builder.py:627: in _download_and_prepare dl_manager=dl_manager, max_examples_per_split=download_config.max_examples_per_split, ../../python_bin/nlp/builder.py:431: in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) ../../python_bin/nlp/datasets/xnli/8bf4185a2da1ef2a523186dd660d9adcf0946189e7fa5942ea31c63c07b68a7f/xnli.py:95: in _split_generators dl_dir = dl_manager.download_and_extract(_DATA_URL) ../../python_bin/nlp/utils/download_manager.py:246: in download_and_extract return self.extract(self.download(url_or_urls)) ../../python_bin/nlp/utils/download_manager.py:186: in download self._record_sizes_checksums(url_or_urls, downloaded_path_or_paths) ../../python_bin/nlp/utils/download_manager.py:166: in _record_sizes_checksums self._recorded_sizes_checksums[url] = get_size_checksum(path) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ path = ('', '/tmp/tmpkajlg9yc/downloads/c0f7773c480a3f2d85639d777e0e17e65527460310d80760fd3fc2b2f2960556.c952a63cb17d3d46e412ceb7dbcd656ce2b15cc9ef17f50c28f81c48a7c853b5') def get_size_checksum(path: str) -> Tuple[int, str]: """Compute the file size and the sha256 checksum of a file""" m = sha256() > with open(path, "rb") as f: E TypeError: expected str, bytes or os.PathLike object, not tuple ../../python_bin/nlp/utils/checksums_utils.py:81: TypeError ``` - the checksums probably need to be updated no? And we should also think about how to write a test for the checksums.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 1, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/35/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/35/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4626
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4626/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4626/comments
https://api.github.com/repos/huggingface/datasets/issues/4626/events
https://github.com/huggingface/datasets/issues/4626
1,293,256,269
I_kwDODunzps5NFYZN
4,626
Add non-commercial licensing info for datasets for which we removed tags
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
open
false
null
[]
null
[ "yep plus `license_details` also makes sense for this IMO" ]
"2022-07-04T14:32:43Z"
"2022-07-08T14:27:29Z"
null
MEMBER
null
null
null
We removed several YAML tags saying that certain datasets can't be used for commercial purposes: https://github.com/huggingface/datasets/pull/4613#discussion_r911919753 Reason for this is that we only allow tags that are part of our [supported list of licenses](https://github.com/huggingface/datasets/blob/84fc3ad73c85de4eda5d152dfede7671491449cb/src/datasets/utils/resources/standard_licenses.tsv) We should update the Licensing Information section of the concerned dataset cards, now that the non-commercial tag doesn't exist anymore for certain datasets
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4626/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4626/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/668
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/668/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/668/comments
https://api.github.com/repos/huggingface/datasets/issues/668/events
https://github.com/huggingface/datasets/issues/668
708,310,956
MDU6SXNzdWU3MDgzMTA5NTY=
668
OverflowError when slicing with an array containing negative ids
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
"2020-09-24T16:27:14Z"
"2020-09-28T14:42:19Z"
"2020-09-28T14:42:19Z"
MEMBER
null
null
null
```python from datasets import Dataset d = ds.Dataset.from_dict({"a": range(10)}) print(d[0]) # {'a': 0} print(d[-1]) # {'a': 9} print(d[[0, -1]]) # OverflowError ``` results in ``` --------------------------------------------------------------------------- OverflowError Traceback (most recent call last) <ipython-input-5-863dc3555598> in <module> ----> 1 d[[0, -1]] ~/Desktop/hf/nlp/src/datasets/arrow_dataset.py in __getitem__(self, key) 1070 format_columns=self._format_columns, 1071 output_all_columns=self._output_all_columns, -> 1072 format_kwargs=self._format_kwargs, 1073 ) 1074 ~/Desktop/hf/nlp/src/datasets/arrow_dataset.py in _getitem(self, key, format_type, format_columns, output_all_columns, format_kwargs) 1025 indices = key 1026 -> 1027 indices_array = pa.array([int(i) for i in indices], type=pa.uint64()) 1028 1029 # Check if we need to convert indices ~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/pyarrow/array.pxi in pyarrow.lib.array() ~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/pyarrow/array.pxi in pyarrow.lib._sequence_to_array() OverflowError: can't convert negative value to unsigned int ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/668/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/668/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2053
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2053/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2053/comments
https://api.github.com/repos/huggingface/datasets/issues/2053/events
https://github.com/huggingface/datasets/pull/2053
831,151,728
MDExOlB1bGxSZXF1ZXN0NTkyNTM4ODY2
2,053
Add bAbI QA tasks
{ "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/gchhablani", "id": 29076344, "login": "gchhablani", "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "repos_url": "https://api.github.com/users/gchhablani/repos", "site_admin": false, "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "type": "User", "url": "https://api.github.com/users/gchhablani" }
[]
closed
false
null
[]
null
[ "Hi @lhoestq,\r\n\r\nShould I remove the 160 configurations? Is it too much?\r\n\r\nEDIT:\r\nCan you also check the task category? I'm not sure if there is an appropriate tag for the same.", "Thanks for the changes !\r\n\r\n> Should I remove the 160 configurations? Is it too much?\r\n\r\nYea 160 configuration is a lot.\r\nMaybe this dataset can work with parameters `type` and `task_no` ?\r\nYou can just remove the configuration in BUILDER_CONFIGS to only keep a few ones.\r\nAlso feel free to add an example in the dataset card of how to load the other configurations\r\n```\r\nload_dataset(\"babi_qa\", type=\"hn\", task_no=\"qa1\")\r\n```\r\nfor example, and with a list of the possible combinations.\r\n\r\n> Can you also check the task category? I'm not sure if there is an appropriate tag for the same.\r\n\r\nIt looks appropriate, thanks :)", "Hi @lhoestq \r\n\r\nI'm unable to test it locally using:\r\n```python\r\nload_dataset(\"datasets/babi_qa\", type=\"hn\", task_no=\"qa1\")\r\n```\r\nIt raises an error:\r\n```python\r\nTypeError: __init__() got an unexpected keyword argument 'type'\r\n```\r\nWill this be possible only after merging? Or am I missing something here?", "Can you try adding this class attribute to `BabiQa` ?\r\n```python\r\nBUILDER_CONFIG_CLASS = BabiQaConfig\r\n```\r\nThis should fix the TypeError issue you got", "My bad. Thanks a lot!", "Hi @lhoestq \r\n\r\nI have added the changes. Only the \"qa1\" task for each category is included. Also, I haven't removed the size categories and other description because I think it will still be useful. I have updated the line in README showing the example.\r\n\r\nThanks,\r\nGunjan", "Hi @lhoestq,\r\n\r\nDoes this look good now?" ]
"2021-03-14T13:04:39Z"
"2021-03-29T12:41:48Z"
"2021-03-29T12:41:48Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2053.diff", "html_url": "https://github.com/huggingface/datasets/pull/2053", "merged_at": "2021-03-29T12:41:48Z", "patch_url": "https://github.com/huggingface/datasets/pull/2053.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2053" }
- **Name:** *The (20) QA bAbI tasks* - **Description:** *The (20) QA bAbI tasks are a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. The aim is to classify these tasks into skill sets,so that researchers can identify (and then rectify) the failings of their systems.* - **Paper:** [arXiv](https://arxiv.org/pdf/1502.05698.pdf) - **Data:** [Facebook Research Page](https://research.fb.com/downloads/babi/) - **Motivation:** This is a unique dataset with story-based Question Answering. It is a part of the `bAbI` project by Facebook Research. **Note**: I have currently added all the 160 configs. If this seems impractical, I can keep only a few. While each `dummy_data.zip` weighs a few KBs, overall it is around 1.3MB for all configurations. This is problematic. Let me know what is to be done. Thanks :) ### Checkbox - [x] Create the dataset script `/datasets/my_dataset/my_dataset.py` using the template - [x] Fill the `_DESCRIPTION` and `_CITATION` variables - [x] Implement `_infos()`, `_split_generators()` and `_generate_examples()` - [x] Make sure that the `BUILDER_CONFIGS` class attribute is filled with the different configurations of the dataset and that the `BUILDER_CONFIG_CLASS` is specified if there is a custom config class. - [x] Generate the metadata file `dataset_infos.json` for all configurations - [x] Generate the dummy data `dummy_data.zip` files to have the dataset script tested and that they don't weigh too much (<50KB) - [x] Add the dataset card `README.md` using the template : fill the tags and the various paragraphs - [x] Both tests for the real data and the dummy data pass.
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2053/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2053/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2246
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2246/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2246/comments
https://api.github.com/repos/huggingface/datasets/issues/2246/events
https://github.com/huggingface/datasets/pull/2246
864,220,031
MDExOlB1bGxSZXF1ZXN0NjIwNDg3OTUw
2,246
Faster map w/ input_columns & faster slicing w/ Iterable keys
{ "avatar_url": "https://avatars.githubusercontent.com/u/39116809?v=4", "events_url": "https://api.github.com/users/norabelrose/events{/privacy}", "followers_url": "https://api.github.com/users/norabelrose/followers", "following_url": "https://api.github.com/users/norabelrose/following{/other_user}", "gists_url": "https://api.github.com/users/norabelrose/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/norabelrose", "id": 39116809, "login": "norabelrose", "node_id": "MDQ6VXNlcjM5MTE2ODA5", "organizations_url": "https://api.github.com/users/norabelrose/orgs", "received_events_url": "https://api.github.com/users/norabelrose/received_events", "repos_url": "https://api.github.com/users/norabelrose/repos", "site_admin": false, "starred_url": "https://api.github.com/users/norabelrose/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/norabelrose/subscriptions", "type": "User", "url": "https://api.github.com/users/norabelrose" }
[]
closed
false
null
[]
null
[ "@lhoestq Just fixed the code style issues— I think it should be good to merge now :)" ]
"2021-04-21T19:49:07Z"
"2021-04-26T16:13:59Z"
"2021-04-26T16:13:59Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2246.diff", "html_url": "https://github.com/huggingface/datasets/pull/2246", "merged_at": "2021-04-26T16:13:58Z", "patch_url": "https://github.com/huggingface/datasets/pull/2246.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2246" }
@lhoestq Fixes #2193 - `map` now uses `with_format` to only load needed columns in memory when `input_columns` is set - Slicing datasets with Iterables of indices now uses a new `Table.fast_gather` method, implemented with `np.searchsorted`, to find the appropriate batch indices all at once. `pa.concat_tables` is no longer used for this; we just call `pa.Table.from_batches` with a list of all the batch slices. Together these changes have sped up batched `map()` calls over subsets of columns quite considerably in my initial testing.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2246/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2246/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2500
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2500/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2500/comments
https://api.github.com/repos/huggingface/datasets/issues/2500/events
https://github.com/huggingface/datasets/pull/2500
920,471,411
MDExOlB1bGxSZXF1ZXN0NjY5NjE2MjQ1
2,500
Add load_dataset_builder
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
[ "Hi @mariosasko, thanks for taking on this issue.\r\n\r\nJust a few logistic suggestions, as you are one of our most active contributors ❤️ :\r\n- When you start working on an issue, you can self-assign it to you by commenting on the issue page with the keyword: `#self-assign`; we have implemented a GitHub Action to take care of that... 😉 \r\n- When you are still working on your Pull Request, instead of using the `[WIP]` in the PR name, you can instead create a *draft* pull request: use the drop-down (on the right of the *Create Pull Request* button) and select **Create Draft Pull Request**, then click **Draft Pull Request**.\r\n\r\nI hope you find these hints useful. 🤗 ", "@albertvillanova Thanks for the tips. When creating this PR, it slipped my mind that this should be a draft. GH has an option to convert already created PRs to draft PRs, but this requires write access for the repo, so maybe you can help.", "Ready for the review!\r\n\r\nOne additional change. I've modified the `camelcase_to_snakecase`/`snakecase_to_camelcase` conversion functions to fix conversion of the names with 2 or more underscores (e.g. `camelcase_to_snakecase(\"__DummyDataset__\")` would return `___dummy_dataset__`; notice one extra underscore at the beginning). The implementation is based on the [inflection](https://pypi.org/project/inflection/) library.\r\n", "Thank you for adding this feature, @mariosasko - this is really awesome!\r\n\r\nTried with:\r\n```\r\npython -c \"from datasets import load_dataset_builder; b = load_dataset_builder('openwebtext-10k'); print(b.cache_dir)\"\r\nUsing the latest cached version of the module from /home/stas/.cache/huggingface/modules/datasets_modules/datasets\r\n/openwebtext-10k/3a8df094c671b4cb63ed0b41f40fb3bd855e9ce2e3765e5df50abcdfb5ec144b (last modified on Wed May 12 \r\n20:22:53 2021) \r\n\r\nsince it couldn't be found locally at openwebtext-10k/openwebtext-10k.py \r\n\r\nor remotely (FileNotFoundError).\r\n\r\n/home/stas/.cache/huggingface/datasets/openwebtext10k/plain_text/1.0.0/3a8df094c671b4cb63ed0b41f40fb3bd855e9ce2e3765e5df50abcdfb5ec144b\r\n```\r\n\r\nThe logger message (edited by me to add new lines to point the issues out) is a bit confusing to the user - that is what does `FileNotFoundError` refer to? \r\n\r\n1. May be replace `FileNotFoundError` with where it was looking for a file online. But then the remote file is there - it's found \r\n2. I'm not sure why it says \"since it couldn't be found locally\" - as it is locally found at the cache folder and again what does \" locally at openwebtext-10k/openwebtext-10k.py\" mean - i.e. where does it look for it? Is it `./openwebtext-10k/openwebtext-10k.py` it's looking for? or in some specific dir?\r\n\r\nIf the cached version always supersedes any other versions perhaps this is what it should say?\r\n```\r\nfound cached version at xxx, not looking for a local at yyy, not downloading remote at zzz\r\n```", "Hi ! Thanks for the comments\r\n\r\nRegarding your last message:\r\nYou must pass `stas/openwebtext-10k` as in `load_dataset` instead of `openwebtext-10k`. Otherwise it doesn't know how to retrieve the builder from the HF Hub.\r\n\r\nWhen you specify a dataset name without a slash, it tries to load a canonical dataset or it looks locally at ./openwebtext-10k/openwebtext-10k.py\r\nHere since `openwebtext-10k` is not a canonical dataset and doesn't exist locally at ./openwebtext-10k/openwebtext-10k.py: it raised a FileNotFoundError.\r\nAs a fallback it managed to find the dataset script in your cache and it used this one.", "Oh, I see, so I actually used an incorrect input. so it was a user error. Correcting it:\r\n\r\n```\r\npython -c \"from datasets import load_dataset_builder; b = load_dataset_builder('stas/openwebtext-10k'); print(b.cache_dir)\"\r\n/home/stas/.cache/huggingface/datasets/openwebtext10k/plain_text/1.0.0/3a8df094c671b4cb63ed0b41f40fb3bd855e9ce2e3765e5df50abcdfb5ec144b\r\n```\r\n\r\nNow there is no logger message. Got it!\r\n\r\nOK, I'm not sure the magical recovery it did in first place is most beneficial in the long run. I'd have rather it failed and said: \"incorrect input there is no such dataset as 'openwebtext-10k' at <this path> or <this url>\" - because if it doesn't fail I may leave it in the code and it'll fail later when another user tries to use my code and won't have the cache. Does it make sense? Giving me `this url` allows me to go to the datasets hub and realize that the dataset is missing the username qualifier.\r\n\r\n> Here since openwebtext-10k is not a canonical dataset and doesn't exist locally at ./openwebtext-10k/openwebtext-10k.py: it raised a FileNotFoundError.\r\n\r\nExcept it slapped the exception name to ` remotely (FileNotFoundError).` which makes no sense.\r\n\r\nPlus for the local it's not clear where is it looking relatively too when it gets `FileNotFoundError` - perhaps it'd help to use absolute path and use it in the message?\r\n\r\n---------------\r\n\r\nFinally, the logger format is not set up so the user gets a warning w/o knowing it's a warning. As you can see it's missing the WARNING pre-amble in https://github.com/huggingface/datasets/pull/2500#issuecomment-874250500\r\n\r\ni.e. I had no idea it was warning me of something, I was just trying to make sense of the message that's why I started the discussion and otherwise I'd have completely missed the point of me making an error." ]
"2021-06-14T14:27:45Z"
"2021-07-09T00:08:16Z"
"2021-07-05T10:45:58Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2500.diff", "html_url": "https://github.com/huggingface/datasets/pull/2500", "merged_at": "2021-07-05T10:45:57Z", "patch_url": "https://github.com/huggingface/datasets/pull/2500.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2500" }
Adds the `load_dataset_builder` function. The good thing is that we can reuse this function to load the dataset info without downloading the dataset itself. TODOs: - [x] Add docstring and entry in the docs - [x] Add tests Closes #2484
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2500/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2500/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2960
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2960/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2960/comments
https://api.github.com/repos/huggingface/datasets/issues/2960/events
https://github.com/huggingface/datasets/pull/2960
1,006,222,850
PR_kwDODunzps4sOl0Y
2,960
Support pandas 1.3 new `read_csv` parameters
{ "avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4", "events_url": "https://api.github.com/users/SBrandeis/events{/privacy}", "followers_url": "https://api.github.com/users/SBrandeis/followers", "following_url": "https://api.github.com/users/SBrandeis/following{/other_user}", "gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/SBrandeis", "id": 33657802, "login": "SBrandeis", "node_id": "MDQ6VXNlcjMzNjU3ODAy", "organizations_url": "https://api.github.com/users/SBrandeis/orgs", "received_events_url": "https://api.github.com/users/SBrandeis/received_events", "repos_url": "https://api.github.com/users/SBrandeis/repos", "site_admin": false, "starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions", "type": "User", "url": "https://api.github.com/users/SBrandeis" }
[]
closed
false
null
[]
null
[]
"2021-09-24T08:37:24Z"
"2021-09-24T11:22:31Z"
"2021-09-24T11:22:30Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2960.diff", "html_url": "https://github.com/huggingface/datasets/pull/2960", "merged_at": "2021-09-24T11:22:30Z", "patch_url": "https://github.com/huggingface/datasets/pull/2960.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2960" }
Support two new arguments introduced in pandas v1.3.0: - `encoding_errors` - `on_bad_lines` `read_csv` reference: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2960/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2960/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4219
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4219/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4219/comments
https://api.github.com/repos/huggingface/datasets/issues/4219/events
https://github.com/huggingface/datasets/pull/4219
1,214,934,025
PR_kwDODunzps42v6rE
4,219
Add F1 Metric Card
{ "avatar_url": "https://avatars.githubusercontent.com/u/27527747?v=4", "events_url": "https://api.github.com/users/emibaylor/events{/privacy}", "followers_url": "https://api.github.com/users/emibaylor/followers", "following_url": "https://api.github.com/users/emibaylor/following{/other_user}", "gists_url": "https://api.github.com/users/emibaylor/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/emibaylor", "id": 27527747, "login": "emibaylor", "node_id": "MDQ6VXNlcjI3NTI3NzQ3", "organizations_url": "https://api.github.com/users/emibaylor/orgs", "received_events_url": "https://api.github.com/users/emibaylor/received_events", "repos_url": "https://api.github.com/users/emibaylor/repos", "site_admin": false, "starred_url": "https://api.github.com/users/emibaylor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/emibaylor/subscriptions", "type": "User", "url": "https://api.github.com/users/emibaylor" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
"2022-04-25T19:14:56Z"
"2022-04-26T20:44:18Z"
"2022-04-26T20:37:46Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4219.diff", "html_url": "https://github.com/huggingface/datasets/pull/4219", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/4219.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4219" }
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4219/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4219/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/204
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/204/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/204/comments
https://api.github.com/repos/huggingface/datasets/issues/204/events
https://github.com/huggingface/datasets/pull/204
625,655,849
MDExOlB1bGxSZXF1ZXN0NDIzODE5MTQw
204
Add Dataflow support + Wikipedia + Wiki40b
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
"2020-05-27T12:32:49Z"
"2020-05-28T08:10:35Z"
"2020-05-28T08:10:34Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/204.diff", "html_url": "https://github.com/huggingface/datasets/pull/204", "merged_at": "2020-05-28T08:10:34Z", "patch_url": "https://github.com/huggingface/datasets/pull/204.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/204" }
# Add Dataflow support + Wikipedia + Wiki40b ## Support datasets processing with Apache Beam Some datasets are too big to be processed on a single machine, for example: wikipedia, wiki40b, etc. Apache Beam allows to process datasets on many execution engines like Dataflow, Spark, Flink, etc. To process such datasets with Beam, I added a command to run beam pipelines `nlp-cli run_beam path/to/dataset/script`. Then I used it to process the english + french wikipedia, and the english of wiki40b. The processed arrow files are on GCS and are the result of a Dataflow job. I added a markdown documentation file in `docs` that explains how to use it properly. ## Load already processed datasets Now that we have those datasets already processed, I made it possible to load datasets that are already processed. You can do `load_dataset('wikipedia', '20200501.en')` and it will download the processed files from the Hugging Face GCS directly into the user's cache and be ready to use ! The Wikipedia dataset was already asked in #187 and this PR should soon allow to add Natural Questions as asked in #129 ## Other changes in the code To make things work, I had to do a few adjustments: - add a `ship_files_with_pipeline` method to the `DownloadManager`. This is because beam pipelines can be run in the cloud and therefore need to have access to your downloaded data. I used it in the wikipedia script: ```python if not pipeline.is_local(): downloaded_files = dl_manager.ship_files_with_pipeline(downloaded_files, pipeline) ``` - add parquet to arrow conversion. This is because the output of beam pipelines are parquet files so we need to convert them to arrow and have the arrow files on GCS - add a test script with a dummy beam dataset - minor adjustments to allow read/write operations on remote files using `apache_beam.io.filesystems.FileSystems` if we want (it can be connected to gcp, s3, hdfs, etc...)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/204/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/204/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1126
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1126/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1126/comments
https://api.github.com/repos/huggingface/datasets/issues/1126/events
https://github.com/huggingface/datasets/pull/1126
757,197,735
MDExOlB1bGxSZXF1ZXN0NTMyNjEzNzcw
1,126
Adding babi dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/thomwolf", "id": 7353373, "login": "thomwolf", "node_id": "MDQ6VXNlcjczNTMzNzM=", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "repos_url": "https://api.github.com/users/thomwolf/repos", "site_admin": false, "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "type": "User", "url": "https://api.github.com/users/thomwolf" }
[]
closed
false
null
[]
null
[ "This is ok now @lhoestq\r\n\r\nI've included the tweak to `dummy_data` to only use the data transmitted to `_generate_examples` by default (it only do that if it can find at least one path to an existing file in the `gen_kwargs` and this can be unactivated with a flag).\r\n\r\nShould I extract it in another PR or is it ok like this?", "Nice !\r\nCould you add the dummy data generation trick in another PR ?\r\nI think we can also extend it to make it work not only with data files paths but also with data directories (sometimes it's one of the parent directory that is passed to gen_kwargs, not the actual path to the file).\r\nThis will help a lot to make the dummy data lighter !", "This PR can be closed due to #2053 @lhoestq\r\n\r\n" ]
"2020-12-04T15:42:34Z"
"2021-03-30T09:44:04Z"
"2021-03-30T09:44:04Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1126.diff", "html_url": "https://github.com/huggingface/datasets/pull/1126", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/1126.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1126" }
Adding the English version of bAbI. Samples are taken from ParlAI for consistency with the main users at the moment. Supersede #945 (problem with the rebase) and adresses the issues mentioned in the review (dummy data are smaller now and code comments are fixed).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1126/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1126/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5422
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5422/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5422/comments
https://api.github.com/repos/huggingface/datasets/issues/5422/events
https://github.com/huggingface/datasets/issues/5422
1,533,385,239
I_kwDODunzps5bZZoX
5,422
Datasets load error for saved github issues
{ "avatar_url": "https://avatars.githubusercontent.com/u/7360564?v=4", "events_url": "https://api.github.com/users/folterj/events{/privacy}", "followers_url": "https://api.github.com/users/folterj/followers", "following_url": "https://api.github.com/users/folterj/following{/other_user}", "gists_url": "https://api.github.com/users/folterj/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/folterj", "id": 7360564, "login": "folterj", "node_id": "MDQ6VXNlcjczNjA1NjQ=", "organizations_url": "https://api.github.com/users/folterj/orgs", "received_events_url": "https://api.github.com/users/folterj/received_events", "repos_url": "https://api.github.com/users/folterj/repos", "site_admin": false, "starred_url": "https://api.github.com/users/folterj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/folterj/subscriptions", "type": "User", "url": "https://api.github.com/users/folterj" }
[]
open
false
null
[]
null
[ "I can confirm that the error exists!\r\nI'm trying to read 3 parquet files locally:\r\n```python\r\nfrom datasets import load_dataset, Features, Value, ClassLabel\r\n\r\nreview_dataset = load_dataset(\r\n \"parquet\",\r\n data_files={\r\n \"train\": os.path.join(sentiment_analysis_data_path, \"train.parquet\"),\r\n \"validation\": os.path.join(sentiment_analysis_data_path, \"validation.parquet\"),\r\n \"test\": os.path.join(sentiment_analysis_data_path, \"test.parquet\"),\r\n },\r\n)\r\n```\r\n\r\nBut you can fix it, by specifying `features` for `load_dataset()` function like this:\r\n```python\r\nfrom datasets import load_dataset, Features, Value, ClassLabel\r\n\r\nfeatures = Features(\r\n {\r\n \"label\": ClassLabel(\r\n num_classes=3,\r\n names=[\"negative\", \"neutral\", \"positive\"],\r\n ),\r\n \"text\": Value(dtype=\"string\"),\r\n }\r\n)\r\n\r\nreview_dataset = load_dataset(\r\n \"parquet\",\r\n data_files={\r\n \"train\": os.path.join(sentiment_analysis_data_path, \"train.parquet\"),\r\n \"validation\": os.path.join(sentiment_analysis_data_path, \"validation.parquet\"),\r\n \"test\": os.path.join(sentiment_analysis_data_path, \"test.parquet\"),\r\n },\r\n features=features,\r\n)\r\n\r\nprint(review_dataset)\r\n```", "@Extremesarova I think this is a different issue, but understand using features could be a work-around.\r\nIt seems the field `closed_at` is `null` in many cases.\r\n\r\nI've not found a way to specify only a single feature without (succesfully) specifiying the full and quite detailed set of expected features. Using this features set gives an error the column names don't match.\r\n`features = Features({'closed_at': Value(dtype='timestamp[s]', id=None)})`\r\n\r\n", "Found this when searching for the same error, looks like based on #3965 it's just an issue with the data. I found that changing `df = pd.DataFrame.from_records(all_issues)` to `df = pd.DataFrame.from_records(all_issues).dropna(axis=1, how='all').drop(['milestone'], axis=1)` from the fetch_issues function fixed the issue. \r\nThe \"milestone\" column seemed to be problematic (only ~50 non null rows) and dropped any columns that were all null as well just in case.", "I have this same issue. I saved a dataset to disk and now I can't load it.", "Ok the solution was to use load_from_disk instead of load_dataset.", "Hi @folterj , I faced same issue while creating `issues_dataset` (https://huggingface.co/learn/nlp-course/chapter5/5?fw=pt). The fix which worked for me was loading the *.jsonl file as pd.read_json and then converting it into a Dataset using datasets API.\r\n```\r\nimport pandas as pd\r\ndf=pd.read_json(\"datasets-issues.jsonl\", lines=True)\r\ndf.head()\r\n\r\nfrom datasets import Dataset\r\nissues_dataset = Dataset.from_pandas(df)\r\nissues_dataset\r\nsample = issues_dataset.shuffle(seed=666).select(range(3))\r\nsample[0]\r\n```", "I understand some work-around suggestions would be to not use load_dataset(), and instead using a different API function. Another alternative would be using the same function using streaming, as I had already suggested in my original post.\r\nHowever, the fact remains that there is an issue in this function as reported." ]
"2023-01-14T17:29:38Z"
"2023-09-14T11:39:57Z"
null
NONE
null
null
null
### Describe the bug Loading a previously downloaded & saved dataset as described in the HuggingFace course: issues_dataset = load_dataset("json", data_files="issues/datasets-issues.jsonl", split="train") Gives this error: datasets.builder.DatasetGenerationError: An error occurred while generating the dataset A work-around I found was to use streaming. ### Steps to reproduce the bug Reproduce by executing the code provided: https://huggingface.co/course/chapter5/5?fw=pt From the heading: 'let’s create a function that can download all the issues from a GitHub repository' ### Expected behavior No error ### Environment info Datasets version 2.8.0. Note that version 2.6.1 gives the same error (related to null timestamp). **[EDIT]** This is the complete error trace confirming the issue is related to the timestamp (`Couldn't cast array of type timestamp[s] to null`) ``` Using custom data configuration default-950028611d2860c8 Downloading and preparing dataset json/default to [...]/.cache/huggingface/datasets/json/default-950028611d2860c8/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51... Downloading data files: 100%|██████████| 1/1 [00:00<?, ?it/s] Extracting data files: 100%|██████████| 1/1 [00:00<00:00, 500.63it/s] Generating train split: 2619 examples [00:00, 7155.72 examples/s]Traceback (most recent call last): File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\builder.py", line 1831, in _prepare_split_single writer.write_table(table) File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\arrow_writer.py", line 567, in write_table pa_table = table_cast(pa_table, self._schema) File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\table.py", line 2282, in table_cast return cast_table_to_schema(table, schema) File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\table.py", line 2241, in cast_table_to_schema arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()] File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\table.py", line 2241, in <listcomp> arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()] File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\table.py", line 1807, in wrapper return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks]) File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\table.py", line 1807, in <listcomp> return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks]) File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\table.py", line 2035, in cast_array_to_feature arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()] File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\table.py", line 2035, in <listcomp> arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()] File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\table.py", line 1809, in wrapper return func(array, *args, **kwargs) File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\table.py", line 2101, in cast_array_to_feature return array_cast(array, feature(), allow_number_to_str=allow_number_to_str) File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\table.py", line 1809, in wrapper return func(array, *args, **kwargs) File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\table.py", line 1990, in array_cast raise TypeError(f"Couldn't cast array of type {array.type} to {pa_type}") TypeError: Couldn't cast array of type timestamp[s] to null The above exception was the direct cause of the following exception: Traceback (most recent call last): File "C:\Program Files\JetBrains\PyCharm 2022.1.3\plugins\python\helpers\pydev\pydevconsole.py", line 364, in runcode coro = func() File "<input>", line 1, in <module> File "C:\Program Files\JetBrains\PyCharm 2022.1.3\plugins\python\helpers\pydev\_pydev_bundle\pydev_umd.py", line 198, in runfile pydev_imports.execfile(filename, global_vars, local_vars) # execute the script File "C:\Program Files\JetBrains\PyCharm 2022.1.3\plugins\python\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile exec(compile(contents+"\n", file, 'exec'), glob, loc) File "[...]\PycharmProjects\TransformersTesting\dataset_issues.py", line 20, in <module> issues_dataset = load_dataset("json", data_files="issues/datasets-issues.jsonl", split="train") File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\load.py", line 1757, in load_dataset builder_instance.download_and_prepare( File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\builder.py", line 860, in download_and_prepare self._download_and_prepare( File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\builder.py", line 953, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\builder.py", line 1706, in _prepare_split for job_id, done, content in self._prepare_split_single( File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\builder.py", line 1849, in _prepare_split_single raise DatasetGenerationError("An error occurred while generating the dataset") from e datasets.builder.DatasetGenerationError: An error occurred while generating the dataset Generating train split: 2619 examples [00:19, 7155.72 examples/s] ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5422/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5422/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/548
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/548/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/548/comments
https://api.github.com/repos/huggingface/datasets/issues/548/events
https://github.com/huggingface/datasets/pull/548
689,285,996
MDExOlB1bGxSZXF1ZXN0NDc2MzYzMjU1
548
[Breaking] Switch text loading to multi-threaded PyArrow loading
{ "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/thomwolf", "id": 7353373, "login": "thomwolf", "node_id": "MDQ6VXNlcjczNTMzNzM=", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "repos_url": "https://api.github.com/users/thomwolf/repos", "site_admin": false, "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "type": "User", "url": "https://api.github.com/users/thomwolf" }
[]
closed
false
null
[]
null
[ "Awesome !\r\nAlso I was wondering if we should try to make the hashing of the `data_files` faster (it is used to build the cache directory of datasets like `text` or `json`). Right now it reads each file and hashes all of its data. We could simply hash the path and some metadata including the `time last modified` tag no ? Apparently we can get this tag with `os.path.getmtime(path)`", "I just rebased from master to include the hashing changes from #573 ", "I think this is ready to merge, no?", "Indeed it's ready to merge :)", "Ok added the breaking change info and we can merge indeed.\r\n" ]
"2020-08-31T15:15:41Z"
"2020-09-08T10:19:58Z"
"2020-09-08T10:19:57Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/548.diff", "html_url": "https://github.com/huggingface/datasets/pull/548", "merged_at": "2020-09-08T10:19:57Z", "patch_url": "https://github.com/huggingface/datasets/pull/548.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/548" }
Test if we can get better performances for large-scale text datasets by using multi-threaded text file loading based on Apache Arrow multi-threaded CSV loader. If it works ok, it would fix #546. **Breaking change**: The text lines now do not include final line-breaks anymore.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/548/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/548/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1513
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1513/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1513/comments
https://api.github.com/repos/huggingface/datasets/issues/1513/events
https://github.com/huggingface/datasets/pull/1513
764,016,850
MDExOlB1bGxSZXF1ZXN0NTM4MjgzNDUz
1,513
app_reviews_by_users
{ "avatar_url": "https://avatars.githubusercontent.com/u/44197177?v=4", "events_url": "https://api.github.com/users/darshan-gandhi/events{/privacy}", "followers_url": "https://api.github.com/users/darshan-gandhi/followers", "following_url": "https://api.github.com/users/darshan-gandhi/following{/other_user}", "gists_url": "https://api.github.com/users/darshan-gandhi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/darshan-gandhi", "id": 44197177, "login": "darshan-gandhi", "node_id": "MDQ6VXNlcjQ0MTk3MTc3", "organizations_url": "https://api.github.com/users/darshan-gandhi/orgs", "received_events_url": "https://api.github.com/users/darshan-gandhi/received_events", "repos_url": "https://api.github.com/users/darshan-gandhi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/darshan-gandhi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/darshan-gandhi/subscriptions", "type": "User", "url": "https://api.github.com/users/darshan-gandhi" }
[]
closed
false
null
[]
null
[ "Hi @lhoestq \r\n\r\nI have added the readme file as well, please if you could check it once \r\n\r\nThank you " ]
"2020-12-12T16:23:49Z"
"2020-12-14T20:45:24Z"
"2020-12-14T20:45:24Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1513.diff", "html_url": "https://github.com/huggingface/datasets/pull/1513", "merged_at": "2020-12-14T20:45:24Z", "patch_url": "https://github.com/huggingface/datasets/pull/1513.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1513" }
Software Applications User Reviews
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1513/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1513/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1492
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1492/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1492/comments
https://api.github.com/repos/huggingface/datasets/issues/1492/events
https://github.com/huggingface/datasets/pull/1492
762,965,239
MDExOlB1bGxSZXF1ZXN0NTM3NDYxMjc3
1,492
OPUS UBUNTU dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/22396042?v=4", "events_url": "https://api.github.com/users/rkc007/events{/privacy}", "followers_url": "https://api.github.com/users/rkc007/followers", "following_url": "https://api.github.com/users/rkc007/following{/other_user}", "gists_url": "https://api.github.com/users/rkc007/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/rkc007", "id": 22396042, "login": "rkc007", "node_id": "MDQ6VXNlcjIyMzk2MDQy", "organizations_url": "https://api.github.com/users/rkc007/orgs", "received_events_url": "https://api.github.com/users/rkc007/received_events", "repos_url": "https://api.github.com/users/rkc007/repos", "site_admin": false, "starred_url": "https://api.github.com/users/rkc007/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rkc007/subscriptions", "type": "User", "url": "https://api.github.com/users/rkc007" }
[]
closed
false
null
[]
null
[ "merging since the CI is fixed on master" ]
"2020-12-11T22:01:37Z"
"2020-12-17T14:38:16Z"
"2020-12-17T14:38:15Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1492.diff", "html_url": "https://github.com/huggingface/datasets/pull/1492", "merged_at": "2020-12-17T14:38:15Z", "patch_url": "https://github.com/huggingface/datasets/pull/1492.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1492" }
Dataset: http://opus.nlpl.eu/Ubuntu.php
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1492/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1492/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4693
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4693/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4693/comments
https://api.github.com/repos/huggingface/datasets/issues/4693/events
https://github.com/huggingface/datasets/pull/4693
1,306,788,322
PR_kwDODunzps47go-F
4,693
update `samsum` script
{ "avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4", "events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}", "followers_url": "https://api.github.com/users/bhavitvyamalik/followers", "following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}", "gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/bhavitvyamalik", "id": 19718818, "login": "bhavitvyamalik", "node_id": "MDQ6VXNlcjE5NzE4ODE4", "organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs", "received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events", "repos_url": "https://api.github.com/users/bhavitvyamalik/repos", "site_admin": false, "starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions", "type": "User", "url": "https://api.github.com/users/bhavitvyamalik" }
[ { "color": "0e8a16", "default": false, "description": "Contribution to a dataset script", "id": 4564477500, "name": "dataset contribution", "node_id": "LA_kwDODunzps8AAAABEBBmPA", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution" } ]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "We are closing PRs to dataset scripts because we are moving them to the Hub.\r\n\r\nThanks anyway.\r\n\r\n" ]
"2022-07-16T11:53:05Z"
"2022-09-23T11:40:11Z"
"2022-09-23T11:37:57Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4693.diff", "html_url": "https://github.com/huggingface/datasets/pull/4693", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/4693.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4693" }
Update the `samsum` script after #4672 was merged (the citation is also updated).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4693/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4693/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5015
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5015/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5015/comments
https://api.github.com/repos/huggingface/datasets/issues/5015/events
https://github.com/huggingface/datasets/issues/5015
1,383,485,558
I_kwDODunzps5SdlB2
5,015
Transfer dataset scripts to Hub
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[ "Sounds good ! Can I help with anything ?" ]
"2022-09-23T08:48:10Z"
"2022-10-05T07:15:57Z"
"2022-10-05T07:15:57Z"
MEMBER
null
null
null
Before merging: - #4974 TODO: - [x] Create label: ["dataset contribution"](https://github.com/huggingface/datasets/pulls?q=label%3A%22dataset+contribution%22) - [x] Create project: [Datasets: Transfer datasets to Hub](https://github.com/orgs/huggingface/projects/22/) - [x] PRs: - [x] Add dataset: we should recommend transferring all additions of datasets to the Hub, under the appropriate namespace; no more additions of datasets on GitHub - [x] Update dataset: in general, we should merge bug fixes; enhancements should be considered on a case-by-case basis, depending on whether there is a more suitable namespace on the Hub - [ ] Issues Finally: - [x] #4974 Let me know what you think! :hugs:
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 1, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/5015/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5015/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/565
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/565/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/565/comments
https://api.github.com/repos/huggingface/datasets/issues/565/events
https://github.com/huggingface/datasets/issues/565
691,039,121
MDU6SXNzdWU2OTEwMzkxMjE=
565
No module named 'nlp.logging'
{ "avatar_url": "https://avatars.githubusercontent.com/u/66633754?v=4", "events_url": "https://api.github.com/users/melody-ju/events{/privacy}", "followers_url": "https://api.github.com/users/melody-ju/followers", "following_url": "https://api.github.com/users/melody-ju/following{/other_user}", "gists_url": "https://api.github.com/users/melody-ju/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/melody-ju", "id": 66633754, "login": "melody-ju", "node_id": "MDQ6VXNlcjY2NjMzNzU0", "organizations_url": "https://api.github.com/users/melody-ju/orgs", "received_events_url": "https://api.github.com/users/melody-ju/received_events", "repos_url": "https://api.github.com/users/melody-ju/repos", "site_admin": false, "starred_url": "https://api.github.com/users/melody-ju/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/melody-ju/subscriptions", "type": "User", "url": "https://api.github.com/users/melody-ju" }
[]
closed
false
null
[]
null
[ "Thanks for reporting.\r\n\r\nApparently this is a versioning issue: the lib downloaded the `bleurt` script from the master branch where we did this change recently. We'll fix that in a new release this week or early next week. Cc @thomwolf \r\n\r\nUntil that, I'd suggest you to download the right bleurt folder from github ([this one](https://github.com/huggingface/nlp/tree/0.4.0/metrics/bleurt)) and do\r\n\r\n```python\r\nfrom nlp import load_metric\r\n\r\nbleurt = load_metric(\"path/to/bleurt/folder\")\r\n```\r\n\r\nTo download it you can either clone the repo or download the `bleurt.py` file and place it in a folder named `bleurt` ", "Actually we can fix this on our side, this script didn't had to be updated. I'll do it in a few minutes" ]
"2020-09-02T13:49:50Z"
"2020-09-03T07:29:50Z"
"2020-09-03T07:29:50Z"
NONE
null
null
null
Hi, I am using nlp version 0.4.0. Trying to use bleurt as an eval metric, however, the bleurt script imports nlp.logging which creates the following error. What am I missing? ``` >>> import nlp 2020-09-02 13:47:09.210310: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1 >>> bleurt = nlp.load_metric("bleurt") Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/melody/anaconda3/envs/transformers/lib/python3.6/site-packages/nlp/load.py", line 443, in load_metric metric_cls = import_main_class(module_path, dataset=False) File "/home/melody/anaconda3/envs/transformers/lib/python3.6/site-packages/nlp/load.py", line 61, in import_main_class module = importlib.import_module(module_path) File "/home/melody/anaconda3/envs/transformers/lib/python3.6/importlib/__init__.py", line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 994, in _gcd_import File "<frozen importlib._bootstrap>", line 971, in _find_and_load File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 665, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 678, in exec_module File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "/home/melody/anaconda3/envs/transformers/lib/python3.6/site-packages/nlp/metrics/bleurt/43448cf2959ea81d3ae0e71c5c8ee31dc15eed9932f197f5f50673cbcecff2b5/bleurt.py", line 20, in <module> from nlp.logging import get_logger ModuleNotFoundError: No module named 'nlp.logging' ``` Just to show once again that I can't import the logging module: ``` >>> import nlp 2020-09-02 13:48:38.190621: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1 >>> nlp.__version__ '0.4.0' >>> from nlp.logging import get_logger Traceback (most recent call last): File "<stdin>", line 1, in <module> ModuleNotFoundError: No module named 'nlp.logging' ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/565/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/565/timeline
null
completed
false
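Not part of the issue record above — a short editorial sketch of the workaround suggested in the maintainers' comments: with `nlp==0.4.0`, load the 0.4.0 version of the bleurt metric script from a local folder instead of the master branch. The local path is an assumption.

```python
# Editorial sketch of the workaround from the comments above: copy the
# bleurt.py file from the 0.4.0 tag into a folder named "bleurt" and load it
# from that local path; the path below is illustrative.
from nlp import load_metric

bleurt = load_metric("path/to/bleurt")
```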
https://api.github.com/repos/huggingface/datasets/issues/5263
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5263/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5263/comments
https://api.github.com/repos/huggingface/datasets/issues/5263/events
https://github.com/huggingface/datasets/issues/5263
1,455,252,626
I_kwDODunzps5WvWSS
5,263
Save a dataset in a determined number of shards
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
null
[]
"2022-11-18T14:43:54Z"
"2022-12-14T18:22:59Z"
"2022-12-14T18:22:59Z"
MEMBER
null
null
null
This is useful for distributing the shards to training nodes. It can be implemented in `save_to_disk` and can also leverage multiprocessing to speed up the process.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5263/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5263/timeline
null
completed
false
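Not part of the issue record above — a hedged editorial sketch of what this feature request looks like in use, assuming the `num_shards` / `num_proc` arguments that recent `datasets` releases expose on `save_to_disk`; the dataset and paths are illustrative.

```python
# Editorial sketch (not from the issue): saving a dataset in a fixed number of
# shards, assuming a recent `datasets` version with num_shards / num_proc.
from datasets import Dataset

ds = Dataset.from_dict({"idx": list(range(10_000))})

# Write exactly 8 Arrow shards, using 4 worker processes.
ds.save_to_disk("my_dataset", num_shards=8, num_proc=4)
```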
https://api.github.com/repos/huggingface/datasets/issues/4293
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4293/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4293/comments
https://api.github.com/repos/huggingface/datasets/issues/4293/events
https://github.com/huggingface/datasets/pull/4293
1,228,815,477
PR_kwDODunzps43dRt9
4,293
Fix wrong map parameter name in cache docs
{ "avatar_url": "https://avatars.githubusercontent.com/u/3812788?v=4", "events_url": "https://api.github.com/users/h4iku/events{/privacy}", "followers_url": "https://api.github.com/users/h4iku/followers", "following_url": "https://api.github.com/users/h4iku/following{/other_user}", "gists_url": "https://api.github.com/users/h4iku/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/h4iku", "id": 3812788, "login": "h4iku", "node_id": "MDQ6VXNlcjM4MTI3ODg=", "organizations_url": "https://api.github.com/users/h4iku/orgs", "received_events_url": "https://api.github.com/users/h4iku/received_events", "repos_url": "https://api.github.com/users/h4iku/repos", "site_admin": false, "starred_url": "https://api.github.com/users/h4iku/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/h4iku/subscriptions", "type": "User", "url": "https://api.github.com/users/h4iku" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
"2022-05-08T07:27:46Z"
"2022-06-14T16:49:00Z"
"2022-06-14T16:07:00Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4293.diff", "html_url": "https://github.com/huggingface/datasets/pull/4293", "merged_at": "2022-06-14T16:07:00Z", "patch_url": "https://github.com/huggingface/datasets/pull/4293.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4293" }
The `load_from_cache` parameter of `map` should be `load_from_cache_file`.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4293/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4293/timeline
null
null
true
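Not part of the PR record above — a minimal editorial sketch showing the documented parameter name, `load_from_cache_file`, in use; the dataset and mapping function are illustrative.

```python
# Editorial sketch (not from the PR): force `map` to recompute instead of
# reusing a cached result via load_from_cache_file=False.
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b", "c"]})

ds = ds.map(lambda ex: {"text": ex["text"].upper()}, load_from_cache_file=False)
```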
https://api.github.com/repos/huggingface/datasets/issues/2896
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2896/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2896/comments
https://api.github.com/repos/huggingface/datasets/issues/2896/events
https://github.com/huggingface/datasets/pull/2896
993,613,113
MDExOlB1bGxSZXF1ZXN0NzMxNzcwMTE3
2,896
add multi-proc in `to_csv`
{ "avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4", "events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}", "followers_url": "https://api.github.com/users/bhavitvyamalik/followers", "following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}", "gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/bhavitvyamalik", "id": 19718818, "login": "bhavitvyamalik", "node_id": "MDQ6VXNlcjE5NzE4ODE4", "organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs", "received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events", "repos_url": "https://api.github.com/users/bhavitvyamalik/repos", "site_admin": false, "starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions", "type": "User", "url": "https://api.github.com/users/bhavitvyamalik" }
[]
closed
false
null
[]
null
[ "I think you can just add a test `test_dataset_to_csv_multiproc` in `tests/io/test_csv.py` and we'll be good", "Hi @lhoestq, \r\nI've added `test_dataset_to_csv` apart from `test_dataset_to_csv_multiproc` as no test was there to check generated CSV file when `num_proc=1`. Please let me know if anything is also required! " ]
"2021-09-10T21:35:09Z"
"2021-10-28T05:47:33Z"
"2021-10-26T16:00:42Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2896.diff", "html_url": "https://github.com/huggingface/datasets/pull/2896", "merged_at": "2021-10-26T16:00:41Z", "patch_url": "https://github.com/huggingface/datasets/pull/2896.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2896" }
This PR extends the multi-proc method used in #2747 for `to_json` to `to_csv` as well. Results on my machine after benchmarking on the `ascent_kb` dataset (giving ~45% improvement when compared to num_proc = 1): ``` Time taken on 1 num_proc, 10000 batch_size 674.2055702209473 Time taken on 4 num_proc, 10000 batch_size 425.6553490161896 Time taken on 1 num_proc, 50000 batch_size 623.5897650718689 Time taken on 4 num_proc, 50000 batch_size 380.0402421951294 Time taken on 4 num_proc, 100000 batch_size 361.7168130874634 ``` This is a WIP, as writing tests is still pending for this PR. I'm also exploring [this](https://arrow.apache.org/docs/python/csv.html#incremental-writing) approach, for which I'm using `pyarrow-5.0.0`.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2896/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2896/timeline
null
null
true
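Not part of the PR record above — a minimal editorial sketch of exporting a dataset to CSV with the multiprocessing support this PR adds; the output file name and process count are illustrative.

```python
# Editorial sketch (not from the PR): num_proc > 1 prepares CSV chunks in
# parallel worker processes before writing them out.
from datasets import Dataset

ds = Dataset.from_dict({"idx": list(range(100_000)), "label": [0] * 100_000})

ds.to_csv("dump.csv", num_proc=4)
```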
https://api.github.com/repos/huggingface/datasets/issues/5004
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5004/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5004/comments
https://api.github.com/repos/huggingface/datasets/issues/5004/events
https://github.com/huggingface/datasets/pull/5004
1,380,860,606
PR_kwDODunzps4_WQck
5,004
Remove license tag file and validation
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
"2022-09-21T12:35:14Z"
"2022-09-22T11:47:41Z"
"2022-09-22T11:45:46Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5004.diff", "html_url": "https://github.com/huggingface/datasets/pull/5004", "merged_at": "2022-09-22T11:45:46Z", "patch_url": "https://github.com/huggingface/datasets/pull/5004.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5004" }
As requested, we are removing the validation of the licenses from `datasets` because this is done on the Hub. Fix #4994. Related to: - #4926, which is removing all the validation from `datasets`
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5004/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5004/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5163
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5163/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5163/comments
https://api.github.com/repos/huggingface/datasets/issues/5163/events
https://github.com/huggingface/datasets/pull/5163
1,422,540,337
PR_kwDODunzps5BgQxp
5,163
Reduce default max `writer_batch_size`
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
"2022-10-25T14:14:52Z"
"2022-10-27T12:19:27Z"
"2022-10-27T12:16:47Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5163.diff", "html_url": "https://github.com/huggingface/datasets/pull/5163", "merged_at": "2022-10-27T12:16:47Z", "patch_url": "https://github.com/huggingface/datasets/pull/5163.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5163" }
Reduce the default `writer_batch_size` from 10k to 1k examples. Additionally, align the default values of `batch_size` and `writer_batch_size` in `Dataset.cast` with the values from the corresponding docstring.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5163/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5163/timeline
null
null
true
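Not part of the PR record above — a hedged editorial sketch of what `writer_batch_size` controls in practice during `map`; the dataset, function, and values are illustrative.

```python
# Editorial sketch (not from the PR): writer_batch_size controls how many
# examples are accumulated in memory before being flushed to the Arrow cache
# file; smaller values mean lower peak memory at the cost of more writes.
from datasets import Dataset

ds = Dataset.from_dict({"x": list(range(10_000))})

ds = ds.map(
    lambda batch: {"x2": [v * 2 for v in batch["x"]]},
    batched=True,
    batch_size=1000,
    writer_batch_size=1000,
)
```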
https://api.github.com/repos/huggingface/datasets/issues/6492
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6492/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6492/comments
https://api.github.com/repos/huggingface/datasets/issues/6492/events
https://github.com/huggingface/datasets/pull/6492
2,037,987,267
PR_kwDODunzps5hzjhQ
6,492
Make push_to_hub return CommitInfo
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6492). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "This PR is ready to review @huggingface/datasets.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005093 / 0.011353 (-0.006259) | 0.003695 / 0.011008 (-0.007313) | 0.064648 / 0.038508 (0.026140) | 0.054677 / 0.023109 (0.031568) | 0.242007 / 0.275898 (-0.033891) | 0.265216 / 0.323480 (-0.058264) | 0.003847 / 0.007986 (-0.004138) | 0.003773 / 0.004328 (-0.000556) | 0.048595 / 0.004250 (0.044345) | 0.038122 / 0.037052 (0.001070) | 0.245698 / 0.258489 (-0.012791) | 0.278095 / 0.293841 (-0.015746) | 0.027488 / 0.128546 (-0.101058) | 0.011002 / 0.075646 (-0.064644) | 0.211443 / 0.419271 (-0.207829) | 0.035664 / 0.043533 (-0.007869) | 0.244754 / 0.255139 (-0.010385) | 0.261078 / 0.283200 (-0.022121) | 0.017768 / 0.141683 (-0.123915) | 1.130765 / 1.452155 (-0.321390) | 1.189825 / 1.492716 (-0.302891) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093027 / 0.018006 (0.075021) | 0.302193 / 0.000490 (0.301703) | 0.000207 / 0.000200 (0.000007) | 0.000045 / 0.000054 (-0.000009) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018413 / 0.037411 (-0.018999) | 0.062715 / 0.014526 (0.048190) | 0.073287 / 0.176557 (-0.103269) | 0.120394 / 0.737135 (-0.616741) | 0.077573 / 0.296338 (-0.218765) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled 
read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.284445 / 0.215209 (0.069236) | 2.780718 / 2.077655 (0.703063) | 1.460988 / 1.504120 (-0.043132) | 1.345799 / 1.541195 (-0.195395) | 1.399892 / 1.468490 (-0.068598) | 0.576051 / 4.584777 (-4.008726) | 2.418792 / 3.745712 (-1.326921) | 2.901330 / 5.269862 (-2.368532) | 1.765083 / 4.565676 (-2.800593) | 0.063555 / 0.424275 (-0.360720) | 0.004991 / 0.007607 (-0.002616) | 0.339657 / 0.226044 (0.113613) | 3.372963 / 2.268929 (1.104034) | 1.853667 / 55.444624 (-53.590958) | 1.552022 / 6.876477 (-5.324454) | 1.616452 / 2.142072 (-0.525620) | 0.652309 / 4.805227 (-4.152919) | 0.121125 / 6.500664 (-6.379539) | 0.042420 / 0.075469 (-0.033049) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.954514 / 1.841788 (-0.887274) | 11.853736 / 8.074308 (3.779428) | 10.624571 / 10.191392 (0.433179) | 0.134118 / 0.680424 (-0.546306) | 0.014200 / 0.534201 (-0.520001) | 0.290106 / 0.579283 (-0.289177) | 0.270637 / 0.434364 (-0.163727) | 0.336155 / 0.540337 (-0.204182) | 0.443962 / 1.386936 (-0.942974) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005282 / 0.011353 (-0.006071) | 0.003526 / 0.011008 (-0.007482) | 0.048994 / 0.038508 (0.010486) | 0.055345 / 0.023109 (0.032236) | 0.271587 / 0.275898 (-0.004311) | 0.294676 / 0.323480 (-0.028804) | 0.003989 / 0.007986 (-0.003996) | 0.002594 / 0.004328 (-0.001735) | 0.048310 / 0.004250 (0.044059) | 0.039945 / 0.037052 (0.002893) | 0.277304 / 0.258489 (0.018815) | 0.312017 / 0.293841 (0.018176) | 0.028364 / 0.128546 (-0.100182) | 0.010683 / 0.075646 (-0.064963) | 0.057990 / 0.419271 (-0.361281) | 0.032418 / 0.043533 (-0.011115) | 0.273835 / 0.255139 (0.018697) | 0.288585 / 0.283200 (0.005385) | 0.018964 / 0.141683 (-0.122719) | 1.148863 / 1.452155 (-0.303292) | 1.195684 / 1.492716 (-0.297032) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | 
get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091967 / 0.018006 (0.073960) | 0.303236 / 0.000490 (0.302747) | 0.000214 / 0.000200 (0.000015) | 0.000051 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021960 / 0.037411 (-0.015452) | 0.068744 / 0.014526 (0.054218) | 0.081167 / 0.176557 (-0.095390) | 0.119623 / 0.737135 (-0.617513) | 0.084965 / 0.296338 (-0.211373) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.297740 / 0.215209 (0.082531) | 2.924856 / 2.077655 (0.847201) | 1.602080 / 1.504120 (0.097960) | 1.494083 / 1.541195 (-0.047112) | 1.544662 / 1.468490 (0.076172) | 0.581212 / 4.584777 (-4.003565) | 2.451064 / 3.745712 (-1.294648) | 2.875213 / 5.269862 (-2.394649) | 1.780777 / 4.565676 (-2.784900) | 0.063751 / 0.424275 (-0.360524) | 0.004967 / 0.007607 (-0.002641) | 0.350321 / 0.226044 (0.124276) | 3.449585 / 2.268929 (1.180657) | 1.977666 / 55.444624 (-53.466958) | 1.685125 / 6.876477 (-5.191351) | 1.734466 / 2.142072 (-0.407606) | 0.657477 / 4.805227 (-4.147750) | 0.116767 / 6.500664 (-6.383898) | 0.041400 / 0.075469 (-0.034069) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.985751 / 1.841788 (-0.856037) | 12.300065 / 8.074308 (4.225756) | 10.608238 / 10.191392 (0.416846) | 0.139907 / 0.680424 (-0.540517) | 0.015379 / 0.534201 (-0.518822) | 0.283528 / 0.579283 (-0.295755) | 0.278751 / 0.434364 (-0.155613) | 0.328811 / 0.540337 (-0.211527) | 0.584041 / 1.386936 (-0.802895) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ef0f986518bd252c5314a7e3a419dedcbb166630 \"CML watermark\")\n" ]
"2023-12-12T15:18:16Z"
"2023-12-13T14:29:01Z"
"2023-12-13T14:22:41Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6492.diff", "html_url": "https://github.com/huggingface/datasets/pull/6492", "merged_at": "2023-12-13T14:22:41Z", "patch_url": "https://github.com/huggingface/datasets/pull/6492.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6492" }
Make `push_to_hub` return `CommitInfo`. This is useful, for example, if we pass `create_pr=True` and we want to know the created PR ID. CC: @severo for the use case in https://huggingface.co/datasets/jmhessel/newyorker_caption_contest/discussions/4
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/6492/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6492/timeline
null
null
true
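Not part of the PR record above — a hedged editorial sketch of the use case mentioned in the PR description: capturing the returned `CommitInfo` when `create_pr=True`. The repo id is an assumption and running this requires authentication to the Hub.

```python
# Editorial sketch (not from the PR): with this change, push_to_hub returns a
# huggingface_hub CommitInfo object, which exposes the created PR's URL.
from datasets import Dataset

ds = Dataset.from_dict({"text": ["hello", "world"]})

commit_info = ds.push_to_hub("username/my-dataset", create_pr=True)  # needs a valid token
print(commit_info.pr_url)  # URL of the pull request opened on the Hub
```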
https://api.github.com/repos/huggingface/datasets/issues/5696
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5696/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5696/comments
https://api.github.com/repos/huggingface/datasets/issues/5696/events
https://github.com/huggingface/datasets/issues/5696
1,651,707,008
I_kwDODunzps5icwyA
5,696
Shuffle a sharded iterable dataset without seed can lead to duplicate data
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
null
[]
"2023-04-03T09:40:03Z"
"2023-04-04T14:58:18Z"
"2023-04-04T14:58:18Z"
MEMBER
null
null
null
As reported in https://github.com/huggingface/datasets/issues/5360: if `seed=None` in `.shuffle()`, shuffled datasets don't use the same shuffling seed across nodes. Because of that, the list of shards is not shuffled the same way across nodes, and therefore some shards may be assigned to multiple nodes instead of exactly one. This can only happen when the number of shards is a factor of the number of nodes. The current workaround is to always set a `seed` in `.shuffle()`.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5696/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5696/timeline
null
completed
false
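Not part of the issue record above — a minimal editorial sketch of the stated workaround: pass an explicit seed so every node shuffles the shard list identically. The dataset name and buffer size are illustrative.

```python
# Editorial sketch (not from the issue): with a fixed seed, all nodes agree on
# the shard order, so each shard is assigned to exactly one node.
from datasets import load_dataset

ds = load_dataset("c4", "en", split="train", streaming=True)

ds = ds.shuffle(seed=42, buffer_size=10_000)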
https://api.github.com/repos/huggingface/datasets/issues/1275
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1275/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1275/comments
https://api.github.com/repos/huggingface/datasets/issues/1275/events
https://github.com/huggingface/datasets/pull/1275
758,958,066
MDExOlB1bGxSZXF1ZXN0NTM0MDM2NjIw
1,275
Yoruba GV NER added
{ "avatar_url": "https://avatars.githubusercontent.com/u/23586676?v=4", "events_url": "https://api.github.com/users/dadelani/events{/privacy}", "followers_url": "https://api.github.com/users/dadelani/followers", "following_url": "https://api.github.com/users/dadelani/following{/other_user}", "gists_url": "https://api.github.com/users/dadelani/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dadelani", "id": 23586676, "login": "dadelani", "node_id": "MDQ6VXNlcjIzNTg2Njc2", "organizations_url": "https://api.github.com/users/dadelani/orgs", "received_events_url": "https://api.github.com/users/dadelani/received_events", "repos_url": "https://api.github.com/users/dadelani/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dadelani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dadelani/subscriptions", "type": "User", "url": "https://api.github.com/users/dadelani" }
[]
closed
false
null
[]
null
[ "Thank you. Okay, I will add the dataset card." ]
"2020-12-08T00:31:38Z"
"2020-12-08T23:25:28Z"
"2020-12-08T23:25:28Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1275.diff", "html_url": "https://github.com/huggingface/datasets/pull/1275", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/1275.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1275" }
I just added the Yoruba GV NER dataset from this paper: https://www.aclweb.org/anthology/2020.lrec-1.335/
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1275/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1275/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4845
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4845/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4845/comments
https://api.github.com/repos/huggingface/datasets/issues/4845/events
https://github.com/huggingface/datasets/pull/4845
1,337,928,283
PR_kwDODunzps49IOjf
4,845
Mark CI tests as xfail if Hub HTTP error
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
"2022-08-13T10:45:11Z"
"2022-08-23T04:57:12Z"
"2022-08-23T04:42:26Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4845.diff", "html_url": "https://github.com/huggingface/datasets/pull/4845", "merged_at": "2022-08-23T04:42:26Z", "patch_url": "https://github.com/huggingface/datasets/pull/4845.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4845" }
In order to make testing more robust (and avoid merges to master with red tests), we could mark tests as xfailed (instead of failed) when the Hub raises some temporary HTTP errors. This PR: - marks tests as xfailed only if the Hub raises a 500 error for: - test_upstream_hub - makes pytest report the xfailed/xpassed tests. More tests could also be marked if needed. Examples of CI failures due to temporary Hub HTTP errors: - FAILED tests/test_upstream_hub.py::TestPushToHub::test_push_dataset_dict_to_hub_multiple_files - https://github.com/huggingface/datasets/runs/7806855399?check_suite_focus=true `requests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: https://hub-ci.huggingface.co/api/datasets/__DUMMY_TRANSFORMERS_USER__/test-16603108028233/commit/main (Request ID: aZeAQ5yLktoGHQYBcJ3zo)` - FAILED tests/test_upstream_hub.py::TestPushToHub::test_push_dataset_dict_to_hub_no_token - https://github.com/huggingface/datasets/runs/7840022996?check_suite_focus=true `requests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: https://s3.us-east-1.amazonaws.com/lfs-staging.huggingface.co/repos/81/e3/81e3b831fa9bf23190ec041f26ef7ff6d6b71c1a937b8ec1ef1f1f05b508c089/caae596caa179cf45e7c9ac0c6d9a9cb0fe2d305291bfbb2d8b648ae26ed38b6?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=AKIA4N7VTDGOZQA2IKWK%2F20220815%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20220815T144713Z&X-Amz-Expires=900&X-Amz-Signature=5ddddfe8ef2b0601e80ab41c78a4d77d921942b0d8160bcab40ff894095e6823&X-Amz-SignedHeaders=host&x-id=PutObject` - FAILED tests/test_upstream_hub.py::TestPushToHub::test_push_dataset_dict_to_hub_private - https://github.com/huggingface/datasets/runs/7835921082?check_suite_focus=true `requests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: https://hub-ci.huggingface.co/api/repos/create (Request ID: gL_1I7i2dii9leBhlZen-) - Internal Error - We're working hard to fix that as soon as possible!` - FAILED tests/test_upstream_hub.py::TestPushToHub::test_push_dataset_to_hub_custom_features_image_list - https://github.com/huggingface/datasets/runs/7835920900?check_suite_focus=true - This is not 500, but 404: `requests.exceptions.HTTPError: 404 Client Error: Not Found for url: [https://hub-ci.huggingface.co/datasets/__DUMMY_TRANSFORMERS_USER__/test-16605586458339.git/info/lfs/objects](https://hub-ci.huggingface.co/datasets/__DUMMY_TRANSFORMERS_USER__/test-16605586458339.git/info/lfs/objects/batch)`
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4845/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4845/timeline
null
null
true
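Not part of the PR record above — a hedged editorial sketch of one way to turn a transient Hub 500 error into an xfail instead of a hard failure. The decorator and its name are hypothetical illustrations, not the exact implementation merged in this PR.

```python
# Editorial sketch (not the merged implementation): imperatively xfail a test
# when the Hub returns a temporary HTTP 500 error, and re-raise anything else.
from functools import wraps

import pytest
import requests


def xfail_on_hub_500(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except requests.exceptions.HTTPError as err:
            if err.response is not None and err.response.status_code == 500:
                pytest.xfail("Temporary Hub internal error (HTTP 500)")
            raise

    return wrapper
```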
https://api.github.com/repos/huggingface/datasets/issues/1980
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1980/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1980/comments
https://api.github.com/repos/huggingface/datasets/issues/1980/events
https://github.com/huggingface/datasets/pull/1980
821,312,810
MDExOlB1bGxSZXF1ZXN0NTg0MTI1OTUy
1,980
Loading all answers from drop
{ "avatar_url": "https://avatars.githubusercontent.com/u/25499439?v=4", "events_url": "https://api.github.com/users/KaijuML/events{/privacy}", "followers_url": "https://api.github.com/users/KaijuML/followers", "following_url": "https://api.github.com/users/KaijuML/following{/other_user}", "gists_url": "https://api.github.com/users/KaijuML/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/KaijuML", "id": 25499439, "login": "KaijuML", "node_id": "MDQ6VXNlcjI1NDk5NDM5", "organizations_url": "https://api.github.com/users/KaijuML/orgs", "received_events_url": "https://api.github.com/users/KaijuML/received_events", "repos_url": "https://api.github.com/users/KaijuML/repos", "site_admin": false, "starred_url": "https://api.github.com/users/KaijuML/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/KaijuML/subscriptions", "type": "User", "url": "https://api.github.com/users/KaijuML" }
[]
closed
false
null
[]
null
[ "Nice thanks for the change !\r\nThis looks all good to me\r\n\r\nBefore we merge can you just update the dataset_infos.json file of drop ? You can do it by running\r\n```\r\ndatasets-cli test ./datasets/drop --all_configs --save_infos --ignore_verifications\r\n```", "Done!" ]
"2021-03-03T17:13:07Z"
"2021-03-15T11:27:26Z"
"2021-03-15T11:27:26Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1980.diff", "html_url": "https://github.com/huggingface/datasets/pull/1980", "merged_at": "2021-03-15T11:27:26Z", "patch_url": "https://github.com/huggingface/datasets/pull/1980.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1980" }
Hello all, I propose this change to the DROP loading script so that all answers are loaded, no matter their type. Currently, only "span" answers are loaded, which excludes a significant number of answers from DROP (i.e. "number" and "date"). I updated the script with the version I use for my work. However, I couldn't find a way to verify that everything works when integrated with the datasets repo, since the `load_dataset` method seems to always download the script from GitHub rather than using local files. Note that 9 items from the train set have no answers, as well as 1 from the validation set. The script I propose simply does not load them. Let me know if there is anything else I can do, Clément
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1980/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1980/timeline
null
null
true
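Not part of the PR record above — a hedged editorial sketch of a helper that keeps every answer type in a DROP QA pair ("spans", "number" and "date"). It is based on the publicly documented DROP annotation format; the field names and helper name are assumptions, not the code merged in this PR.

```python
# Editorial sketch (not the merged script): collect all non-empty answers of a
# DROP "answer" dict, whatever their type.
def extract_answers(answer: dict) -> list:
    answers = []
    answers.extend(answer.get("spans", []))          # span answers
    if answer.get("number"):                          # numeric answers
        answers.append(answer["number"])
    date = answer.get("date", {})                      # date answers
    if any(date.get(k) for k in ("day", "month", "year")):
        answers.append(" ".join(date[k] for k in ("day", "month", "year") if date.get(k)))
    return answers
```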
https://api.github.com/repos/huggingface/datasets/issues/3055
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3055/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3055/comments
https://api.github.com/repos/huggingface/datasets/issues/3055/events
https://github.com/huggingface/datasets/issues/3055
1,022,319,238
I_kwDODunzps4871qG
3,055
CI test suite fails after meteor metric update
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[]
"2021-10-11T06:37:12Z"
"2021-10-11T07:30:31Z"
"2021-10-11T07:30:31Z"
MEMBER
null
null
null
## Describe the bug CI test suite fails: https://app.circleci.com/pipelines/github/huggingface/datasets/8110/workflows/f059ba43-9154-4632-bebb-82318447ddc9/jobs/50010 Stack trace: ``` ___________________ LocalMetricTest.test_load_metric_meteor ____________________ [gw1] linux -- Python 3.6.15 /home/circleci/.pyenv/versions/3.6.15/bin/python3.6 self = <tests.test_metric_common.LocalMetricTest testMethod=test_load_metric_meteor> metric_name = 'meteor' def test_load_metric(self, metric_name): doctest.ELLIPSIS_MARKER = "[...]" metric_module = importlib.import_module(datasets.load.prepare_module(os.path.join("metrics", metric_name))[0]) metric = datasets.load.import_main_class(metric_module.__name__, dataset=False) # check parameters parameters = inspect.signature(metric._compute).parameters self.assertTrue("predictions" in parameters) self.assertTrue("references" in parameters) self.assertTrue(all([p.kind != p.VAR_KEYWORD for p in parameters.values()])) # no **kwargs # run doctest with self.patch_intensive_calls(metric_name, metric_module.__name__): with self.use_local_metrics(): > results = doctest.testmod(metric_module, verbose=True, raise_on_error=True) tests/test_metric_common.py:75: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../.pyenv/versions/3.6.15/lib/python3.6/doctest.py:1951: in testmod runner.run(test) ../.pyenv/versions/3.6.15/lib/python3.6/doctest.py:1839: in run r = DocTestRunner.run(self, test, compileflags, out, False) ../.pyenv/versions/3.6.15/lib/python3.6/doctest.py:1476: in run return self.__run(test, compileflags, out) ../.pyenv/versions/3.6.15/lib/python3.6/doctest.py:1382: in __run exception) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <doctest.DebugRunner object at 0x7f4c26bd3da0> out = <built-in method write of _io.TextIOWrapper object at 0x7f51a21852d0> test = <DocTest datasets_modules.datasets.meteor.6201bb45d2c144ea7963680949d20f523d74a741fa0f8a806f836e6caa5245d7.meteor.Mete...ets_modules/datasets/meteor/6201bb45d2c144ea7963680949d20f523d74a741fa0f8a806f836e6caa5245d7/meteor.py:87 (5 examples)> example = <doctest.Example object at 0x7f4c26bd3eb8> exc_info = (<class 'TypeError'>, TypeError('"hypothesis" expects pre-tokenized hypothesis (Iterable[str]): It is a guide to action which ensures that the military always obeys the commands of the party',), <traceback object at 0x7f4cd01afec8>) def report_unexpected_exception(self, out, test, example, exc_info): > raise UnexpectedException(test, example, exc_info) E doctest.UnexpectedException: <DocTest datasets_modules.datasets.meteor.6201bb45d2c144ea7963680949d20f523d74a741fa0f8a806f836e6caa5245d7.meteor.Meteor from /tmp/pytest-of-circleci/pytest-0/popen-gw1/cache/modules/datasets_modules/datasets/meteor/6201bb45d2c144ea7963680949d20f523d74a741fa0f8a806f836e6caa5245d7/meteor.py:87 (5 examples)> ../.pyenv/versions/3.6.15/lib/python3.6/doctest.py:1845: UnexpectedException ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3055/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3055/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/61
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/61/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/61/comments
https://api.github.com/repos/huggingface/datasets/issues/61/events
https://github.com/huggingface/datasets/pull/61
614,607,474
MDExOlB1bGxSZXF1ZXN0NDE1MTI3MTU4
61
[Load] rename setup_module to prepare_module
{ "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickvonplaten", "id": 23423619, "login": "patrickvonplaten", "node_id": "MDQ6VXNlcjIzNDIzNjE5", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickvonplaten" }
[]
closed
false
null
[]
null
[]
"2020-05-08T08:54:22Z"
"2020-05-08T08:56:32Z"
"2020-05-08T08:56:16Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/61.diff", "html_url": "https://github.com/huggingface/datasets/pull/61", "merged_at": "2020-05-08T08:56:16Z", "patch_url": "https://github.com/huggingface/datasets/pull/61.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/61" }
Rename setup_module to prepare_module due to issues with pytest's `setup_module` function. See: PR #59.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/61/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/61/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2515
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2515/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2515/comments
https://api.github.com/repos/huggingface/datasets/issues/2515/events
https://github.com/huggingface/datasets/pull/2515
924,435,447
MDExOlB1bGxSZXF1ZXN0NjczMDc3NTIx
2,515
CRD3 dataset card
{ "avatar_url": "https://avatars.githubusercontent.com/u/1937386?v=4", "events_url": "https://api.github.com/users/wilsonyhlee/events{/privacy}", "followers_url": "https://api.github.com/users/wilsonyhlee/followers", "following_url": "https://api.github.com/users/wilsonyhlee/following{/other_user}", "gists_url": "https://api.github.com/users/wilsonyhlee/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/wilsonyhlee", "id": 1937386, "login": "wilsonyhlee", "node_id": "MDQ6VXNlcjE5MzczODY=", "organizations_url": "https://api.github.com/users/wilsonyhlee/orgs", "received_events_url": "https://api.github.com/users/wilsonyhlee/received_events", "repos_url": "https://api.github.com/users/wilsonyhlee/repos", "site_admin": false, "starred_url": "https://api.github.com/users/wilsonyhlee/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wilsonyhlee/subscriptions", "type": "User", "url": "https://api.github.com/users/wilsonyhlee" }
[]
closed
false
null
[]
null
[]
"2021-06-18T00:24:07Z"
"2021-06-21T10:18:44Z"
"2021-06-21T10:18:44Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2515.diff", "html_url": "https://github.com/huggingface/datasets/pull/2515", "merged_at": "2021-06-21T10:18:44Z", "patch_url": "https://github.com/huggingface/datasets/pull/2515.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2515" }
This PR adds additional information to the CRD3 dataset card.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2515/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2515/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/293
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/293/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/293/comments
https://api.github.com/repos/huggingface/datasets/issues/293/events
https://github.com/huggingface/datasets/pull/293
642,942,182
MDExOlB1bGxSZXF1ZXN0NDM3ODM1ODI4
293
Don't test community datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
"2020-06-22T10:15:33Z"
"2020-06-22T11:07:00Z"
"2020-06-22T11:06:59Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/293.diff", "html_url": "https://github.com/huggingface/datasets/pull/293", "merged_at": "2020-06-22T11:06:59Z", "patch_url": "https://github.com/huggingface/datasets/pull/293.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/293" }
This PR disables testing for community datasets on AWS. It should fix the CI that is currently failing.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/293/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/293/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4037
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4037/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4037/comments
https://api.github.com/repos/huggingface/datasets/issues/4037/events
https://github.com/huggingface/datasets/issues/4037
1,183,144,486
I_kwDODunzps5GhVom
4,037
Error while building documentation
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[ "After some investigation, maybe the bug is in `doc-builder`.\r\n\r\nI've opened an issue there:\r\n- huggingface/doc-builder#160", "Fixed by @lewtun (thank you):\r\n- huggingface/doc-builder@31fe6c8bc7225810e281c2f6c6cd32f38828c504" ]
"2022-03-28T09:22:44Z"
"2022-03-28T10:01:52Z"
"2022-03-28T10:00:48Z"
MEMBER
null
null
null
## Describe the bug Documentation building is failing: - https://github.com/huggingface/datasets/runs/5716300989?check_suite_focus=true ``` ValueError: There was an error when converting ../datasets/docs/source/package_reference/main_classes.mdx to the MDX format. Unable to find datasets.filesystems.S3FileSystem in datasets. Make sure the path to that object is correct. ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4037/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4037/timeline
null
completed
false